Sep 4 23:48:32.511449 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025 Sep 4 23:48:32.511474 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:48:32.511486 kernel: BIOS-provided physical RAM map: Sep 4 23:48:32.511493 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 23:48:32.511500 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 23:48:32.511506 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 23:48:32.511514 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 4 23:48:32.511521 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 4 23:48:32.511528 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 4 23:48:32.511537 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 4 23:48:32.511544 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 23:48:32.511550 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 23:48:32.511560 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 4 23:48:32.511567 kernel: NX (Execute Disable) protection: active Sep 4 23:48:32.511575 kernel: APIC: Static calls initialized Sep 4 23:48:32.511587 kernel: SMBIOS 2.8 present. 
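The command line recorded above carries the dm-verity parameters for the /usr partition (verity.usr, verity.usrhash) alongside root=LABEL=ROOT and the serial console setup. A minimal sketch, assuming a Linux guest with /proc mounted, of splitting such a command line into key/value pairs for inspection; parse_cmdline is an illustrative helper, not part of the boot chain, and repeated keys keep their last value:

# Illustrative only: tokenize a kernel command line such as the one logged
# above (or the live one in /proc/cmdline) into a {key: value} mapping.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        # Bare flags without "=" are stored with a value of None.
        params[key] = value if sep else None
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("root"), params.get("verity.usrhash"))
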
Sep 4 23:48:32.511595 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 4 23:48:32.511602 kernel: Hypervisor detected: KVM Sep 4 23:48:32.511610 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 23:48:32.511617 kernel: kvm-clock: using sched offset of 4373365460 cycles Sep 4 23:48:32.511625 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 23:48:32.511632 kernel: tsc: Detected 2794.750 MHz processor Sep 4 23:48:32.511652 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 23:48:32.511661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 23:48:32.511668 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 4 23:48:32.511689 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 23:48:32.511697 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 23:48:32.511705 kernel: Using GB pages for direct mapping Sep 4 23:48:32.511712 kernel: ACPI: Early table checksum verification disabled Sep 4 23:48:32.511719 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 4 23:48:32.511727 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511735 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511742 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511753 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 4 23:48:32.511761 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511768 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511775 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511783 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 23:48:32.511790 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 4 23:48:32.511798 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 4 23:48:32.511809 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 4 23:48:32.511819 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 4 23:48:32.511827 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 4 23:48:32.511834 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 4 23:48:32.511842 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 4 23:48:32.511852 kernel: No NUMA configuration found Sep 4 23:48:32.511860 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 4 23:48:32.511870 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Sep 4 23:48:32.511878 kernel: Zone ranges: Sep 4 23:48:32.511885 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 23:48:32.511893 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 4 23:48:32.511900 kernel: Normal empty Sep 4 23:48:32.511908 kernel: Movable zone start for each node Sep 4 23:48:32.511916 kernel: Early memory node ranges Sep 4 23:48:32.511923 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 23:48:32.511931 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 4 23:48:32.511938 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 4 23:48:32.511948 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Sep 4 23:48:32.511958 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 23:48:32.511966 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 4 23:48:32.511974 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 4 23:48:32.511981 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 23:48:32.511989 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 23:48:32.511997 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 23:48:32.512004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 23:48:32.512012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 23:48:32.512022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 23:48:32.512029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 23:48:32.512037 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 23:48:32.512045 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 23:48:32.512052 kernel: TSC deadline timer available Sep 4 23:48:32.512060 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 4 23:48:32.512067 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 23:48:32.512075 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 23:48:32.512085 kernel: kvm-guest: setup PV sched yield Sep 4 23:48:32.512114 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 4 23:48:32.512122 kernel: Booting paravirtualized kernel on KVM Sep 4 23:48:32.512130 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 23:48:32.512138 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 23:48:32.512145 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 4 23:48:32.512153 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 4 23:48:32.512160 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 23:48:32.512168 kernel: kvm-guest: PV spinlocks enabled Sep 4 23:48:32.512175 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 23:48:32.512188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:48:32.512197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 23:48:32.512204 kernel: random: crng init done Sep 4 23:48:32.512212 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 23:48:32.512219 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 23:48:32.512227 kernel: Fallback order for Node 0: 0 Sep 4 23:48:32.512234 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Sep 4 23:48:32.512242 kernel: Policy zone: DMA32 Sep 4 23:48:32.512249 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 23:48:32.512260 kernel: Memory: 2432540K/2571752K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 138952K reserved, 0K cma-reserved) Sep 4 23:48:32.512267 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 23:48:32.512275 kernel: ftrace: allocating 37943 entries in 149 pages Sep 4 23:48:32.512282 kernel: ftrace: allocated 149 pages with 4 groups Sep 4 23:48:32.512290 kernel: Dynamic Preempt: voluntary Sep 4 23:48:32.512297 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 23:48:32.512305 kernel: rcu: RCU event tracing is enabled. Sep 4 23:48:32.512313 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 23:48:32.512324 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 23:48:32.512331 kernel: Rude variant of Tasks RCU enabled. Sep 4 23:48:32.512339 kernel: Tracing variant of Tasks RCU enabled. Sep 4 23:48:32.512347 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 23:48:32.512357 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 23:48:32.512365 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 23:48:32.512375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 23:48:32.512385 kernel: Console: colour VGA+ 80x25 Sep 4 23:48:32.512395 kernel: printk: console [ttyS0] enabled Sep 4 23:48:32.512405 kernel: ACPI: Core revision 20230628 Sep 4 23:48:32.512419 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 23:48:32.512427 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 23:48:32.512435 kernel: x2apic enabled Sep 4 23:48:32.512442 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 23:48:32.512450 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 23:48:32.512458 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 23:48:32.512466 kernel: kvm-guest: setup PV IPIs Sep 4 23:48:32.512483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 23:48:32.512491 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 4 23:48:32.512499 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Sep 4 23:48:32.512507 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 23:48:32.512517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 23:48:32.512525 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 23:48:32.512533 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 23:48:32.512541 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 23:48:32.512549 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 4 23:48:32.512560 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 23:48:32.512568 kernel: active return thunk: retbleed_return_thunk Sep 4 23:48:32.512579 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 23:48:32.512587 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 23:48:32.512595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 23:48:32.512603 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 23:48:32.512612 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 23:48:32.512620 kernel: active return thunk: srso_return_thunk Sep 4 23:48:32.512630 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 23:48:32.512638 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 23:48:32.512646 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 23:48:32.512654 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 23:48:32.512662 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 23:48:32.512670 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 4 23:48:32.512678 kernel: Freeing SMP alternatives memory: 32K Sep 4 23:48:32.512686 kernel: pid_max: default: 32768 minimum: 301 Sep 4 23:48:32.512694 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 23:48:32.512704 kernel: landlock: Up and running. Sep 4 23:48:32.512712 kernel: SELinux: Initializing. Sep 4 23:48:32.512720 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 23:48:32.512728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 23:48:32.512736 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 23:48:32.512744 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 23:48:32.512752 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 23:48:32.512760 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 23:48:32.512770 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 23:48:32.512780 kernel: ... version: 0 Sep 4 23:48:32.512789 kernel: ... bit width: 48 Sep 4 23:48:32.512797 kernel: ... generic registers: 6 Sep 4 23:48:32.512805 kernel: ... value mask: 0000ffffffffffff Sep 4 23:48:32.512813 kernel: ... max period: 00007fffffffffff Sep 4 23:48:32.512820 kernel: ... fixed-purpose events: 0 Sep 4 23:48:32.512828 kernel: ... 
event mask: 000000000000003f Sep 4 23:48:32.512836 kernel: signal: max sigframe size: 1776 Sep 4 23:48:32.512844 kernel: rcu: Hierarchical SRCU implementation. Sep 4 23:48:32.512854 kernel: rcu: Max phase no-delay instances is 400. Sep 4 23:48:32.512862 kernel: smp: Bringing up secondary CPUs ... Sep 4 23:48:32.512870 kernel: smpboot: x86: Booting SMP configuration: Sep 4 23:48:32.512878 kernel: .... node #0, CPUs: #1 #2 #3 Sep 4 23:48:32.512885 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 23:48:32.512893 kernel: smpboot: Max logical packages: 1 Sep 4 23:48:32.512901 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Sep 4 23:48:32.512909 kernel: devtmpfs: initialized Sep 4 23:48:32.512917 kernel: x86/mm: Memory block size: 128MB Sep 4 23:48:32.512925 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 23:48:32.512935 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 23:48:32.512943 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 23:48:32.512951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 23:48:32.512959 kernel: audit: initializing netlink subsys (disabled) Sep 4 23:48:32.512967 kernel: audit: type=2000 audit(1757029710.952:1): state=initialized audit_enabled=0 res=1 Sep 4 23:48:32.512975 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 23:48:32.512983 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 23:48:32.512990 kernel: cpuidle: using governor menu Sep 4 23:48:32.512998 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 23:48:32.513009 kernel: dca service started, version 1.12.1 Sep 4 23:48:32.513017 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 4 23:48:32.513036 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 4 23:48:32.513055 kernel: PCI: Using configuration type 1 for base access Sep 4 23:48:32.513073 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 4 23:48:32.513117 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 23:48:32.513129 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 23:48:32.513137 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 23:48:32.513149 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 23:48:32.513158 kernel: ACPI: Added _OSI(Module Device) Sep 4 23:48:32.513166 kernel: ACPI: Added _OSI(Processor Device) Sep 4 23:48:32.513174 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 23:48:32.513182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 23:48:32.513190 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 4 23:48:32.513198 kernel: ACPI: Interpreter enabled Sep 4 23:48:32.513206 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 23:48:32.513214 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 23:48:32.513222 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 23:48:32.513239 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 23:48:32.513247 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 4 23:48:32.513255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 23:48:32.513497 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 23:48:32.513646 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 4 23:48:32.513784 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 4 23:48:32.513795 kernel: PCI host bridge to bus 0000:00 Sep 4 23:48:32.513942 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 23:48:32.514075 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 23:48:32.514300 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 23:48:32.514438 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 4 23:48:32.514561 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 4 23:48:32.514681 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 4 23:48:32.514804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 23:48:32.514978 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 4 23:48:32.515161 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 4 23:48:32.515299 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Sep 4 23:48:32.515431 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Sep 4 23:48:32.515563 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Sep 4 23:48:32.515694 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 23:48:32.515862 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 23:48:32.515996 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 4 23:48:32.516183 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Sep 4 23:48:32.516349 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Sep 4 23:48:32.516531 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 4 23:48:32.516694 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Sep 4 23:48:32.516853 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Sep 4 23:48:32.517022 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Sep 4 
23:48:32.517276 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 4 23:48:32.517443 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Sep 4 23:48:32.517604 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Sep 4 23:48:32.517770 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 4 23:48:32.517928 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Sep 4 23:48:32.518149 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 4 23:48:32.518318 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 4 23:48:32.518492 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 4 23:48:32.518649 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Sep 4 23:48:32.518807 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Sep 4 23:48:32.518980 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 4 23:48:32.519178 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 4 23:48:32.519199 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 23:48:32.519210 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 23:48:32.519221 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 23:48:32.519231 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 23:48:32.519241 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 4 23:48:32.519252 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 4 23:48:32.519263 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 4 23:48:32.519273 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 4 23:48:32.519287 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 4 23:48:32.519301 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 4 23:48:32.519312 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 4 23:48:32.519322 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 4 23:48:32.519333 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 4 23:48:32.519343 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 4 23:48:32.519354 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 4 23:48:32.519364 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 4 23:48:32.519375 kernel: iommu: Default domain type: Translated Sep 4 23:48:32.519385 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 23:48:32.519399 kernel: PCI: Using ACPI for IRQ routing Sep 4 23:48:32.519411 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 23:48:32.519421 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 23:48:32.519432 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 4 23:48:32.519591 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 4 23:48:32.519745 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 4 23:48:32.519899 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 23:48:32.519913 kernel: vgaarb: loaded Sep 4 23:48:32.519924 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 23:48:32.519939 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 23:48:32.519950 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 23:48:32.519960 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 23:48:32.519971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 23:48:32.519982 kernel: pnp: PnP ACPI init Sep 
4 23:48:32.520276 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 4 23:48:32.520295 kernel: pnp: PnP ACPI: found 6 devices Sep 4 23:48:32.520306 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 23:48:32.520321 kernel: NET: Registered PF_INET protocol family Sep 4 23:48:32.520332 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 23:48:32.520343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 23:48:32.520353 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 23:48:32.520364 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 23:48:32.520374 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 23:48:32.520385 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 23:48:32.520395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 23:48:32.520412 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 23:48:32.520423 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 23:48:32.520435 kernel: NET: Registered PF_XDP protocol family Sep 4 23:48:32.520583 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 23:48:32.520724 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 23:48:32.520866 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 23:48:32.521009 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 4 23:48:32.521177 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 4 23:48:32.521319 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 4 23:48:32.521338 kernel: PCI: CLS 0 bytes, default 64 Sep 4 23:48:32.521351 kernel: Initialise system trusted keyrings Sep 4 23:48:32.521364 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 23:48:32.521376 kernel: Key type asymmetric registered Sep 4 23:48:32.521386 kernel: Asymmetric key parser 'x509' registered Sep 4 23:48:32.521397 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 4 23:48:32.521407 kernel: io scheduler mq-deadline registered Sep 4 23:48:32.521418 kernel: io scheduler kyber registered Sep 4 23:48:32.521429 kernel: io scheduler bfq registered Sep 4 23:48:32.521443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 23:48:32.521455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 4 23:48:32.521466 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 4 23:48:32.521477 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 4 23:48:32.521488 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 23:48:32.521499 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 23:48:32.521510 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 23:48:32.521520 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 23:48:32.521531 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 23:48:32.521721 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 4 23:48:32.521737 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 23:48:32.521881 kernel: rtc_cmos 00:04: registered as rtc0 Sep 4 23:48:32.522028 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T23:48:31 UTC (1757029711) Sep 4 23:48:32.522241 kernel: rtc_cmos 
00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 23:48:32.522257 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 23:48:32.522267 kernel: NET: Registered PF_INET6 protocol family Sep 4 23:48:32.522278 kernel: Segment Routing with IPv6 Sep 4 23:48:32.522293 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 23:48:32.522304 kernel: NET: Registered PF_PACKET protocol family Sep 4 23:48:32.522315 kernel: Key type dns_resolver registered Sep 4 23:48:32.522325 kernel: IPI shorthand broadcast: enabled Sep 4 23:48:32.522336 kernel: sched_clock: Marking stable (1156004143, 421474289)->(1740372768, -162894336) Sep 4 23:48:32.522347 kernel: registered taskstats version 1 Sep 4 23:48:32.522357 kernel: Loading compiled-in X.509 certificates Sep 4 23:48:32.522368 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b' Sep 4 23:48:32.522379 kernel: Key type .fscrypt registered Sep 4 23:48:32.522392 kernel: Key type fscrypt-provisioning registered Sep 4 23:48:32.522403 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 23:48:32.522413 kernel: ima: Allocated hash algorithm: sha1 Sep 4 23:48:32.522424 kernel: ima: No architecture policies found Sep 4 23:48:32.522435 kernel: clk: Disabling unused clocks Sep 4 23:48:32.522445 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 4 23:48:32.522456 kernel: Write protecting the kernel read-only data: 38912k Sep 4 23:48:32.522467 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 4 23:48:32.522477 kernel: Run /init as init process Sep 4 23:48:32.522490 kernel: with arguments: Sep 4 23:48:32.522501 kernel: /init Sep 4 23:48:32.522511 kernel: with environment: Sep 4 23:48:32.522521 kernel: HOME=/ Sep 4 23:48:32.522531 kernel: TERM=linux Sep 4 23:48:32.522542 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 23:48:32.522553 systemd[1]: Successfully made /usr/ read-only. Sep 4 23:48:32.522568 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:48:32.522583 systemd[1]: Detected virtualization kvm. Sep 4 23:48:32.522594 systemd[1]: Detected architecture x86-64. Sep 4 23:48:32.522605 systemd[1]: Running in initrd. Sep 4 23:48:32.522616 systemd[1]: No hostname configured, using default hostname. Sep 4 23:48:32.522627 systemd[1]: Hostname set to . Sep 4 23:48:32.522638 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:48:32.522649 systemd[1]: Queued start job for default target initrd.target. Sep 4 23:48:32.522661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:48:32.522676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:48:32.522703 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 23:48:32.522718 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:48:32.522730 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Sep 4 23:48:32.522743 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 23:48:32.522760 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 23:48:32.522771 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 23:48:32.522783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:48:32.522798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:48:32.522810 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:48:32.522821 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:48:32.522833 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:48:32.522845 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:48:32.522860 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:48:32.522872 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:48:32.522884 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 23:48:32.522896 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 23:48:32.522907 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:48:32.522919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:48:32.522931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:48:32.522943 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:48:32.522957 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 23:48:32.522969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:48:32.522981 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 23:48:32.522993 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 23:48:32.523004 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:48:32.523016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:48:32.523028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:48:32.523039 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 23:48:32.523051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:48:32.523066 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 23:48:32.523131 systemd-journald[194]: Collecting audit messages is disabled. Sep 4 23:48:32.523164 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 23:48:32.523177 systemd-journald[194]: Journal started Sep 4 23:48:32.523205 systemd-journald[194]: Runtime Journal (/run/log/journal/bcccfdac1987496fa2073a510f2a29bd) is 6M, max 48.4M, 42.3M free. Sep 4 23:48:32.511755 systemd-modules-load[195]: Inserted module 'overlay' Sep 4 23:48:32.546128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 23:48:32.549125 kernel: Bridge firewalling registered Sep 4 23:48:32.549140 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 4 23:48:32.553231 systemd[1]: Started systemd-journald.service - Journal Service. 
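The \x2d sequences in the device unit names above come from systemd's path escaping: the leading slash is dropped, each remaining '/' becomes '-', and any byte outside [a-zA-Z0-9:_.] (including a literal '-') is rewritten as \xXX, which is how /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A simplified sketch of that rule; systemd_escape_path is an illustrative stand-in for systemd-escape --path and omits its special cases (leading dots, the root path, empty components):

def systemd_escape_path(path: str) -> str:
    # Map the path byte by byte after dropping the leading '/':
    #   '/'              -> '-'
    #   [a-zA-Z0-9:_.]   -> kept as-is
    #   anything else    -> \xXX (hex of the byte), e.g. '-' -> \x2d
    out = []
    for byte in path.lstrip("/").encode():
        ch = chr(byte)
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in ":_."):
            out.append(ch)
        else:
            out.append("\\x%02x" % byte)
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
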
Sep 4 23:48:32.554238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:48:32.554928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:48:32.557667 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 23:48:32.609512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:48:32.613556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:48:32.616637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:48:32.657674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:48:32.663447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:48:32.666615 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:48:32.672490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:48:32.694960 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 23:48:32.703026 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:48:32.750328 dracut-cmdline[231]: dracut-dracut-053 Sep 4 23:48:32.750328 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 4 23:48:32.750937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:48:32.792737 systemd-resolved[279]: Positive Trust Anchors: Sep 4 23:48:32.792754 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:48:32.792785 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:48:32.795493 systemd-resolved[279]: Defaulting to hostname 'linux'. Sep 4 23:48:32.813708 kernel: SCSI subsystem initialized Sep 4 23:48:32.796803 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:48:32.803495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:48:32.827132 kernel: Loading iSCSI transport class v2.0-870. Sep 4 23:48:32.840158 kernel: iscsi: registered transport (tcp) Sep 4 23:48:32.868349 kernel: iscsi: registered transport (qla4xxx) Sep 4 23:48:32.868435 kernel: QLogic iSCSI HBA Driver Sep 4 23:48:32.935950 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 23:48:32.950392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 23:48:32.987291 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 23:48:32.987392 kernel: device-mapper: uevent: version 1.0.3 Sep 4 23:48:32.988442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 23:48:33.034137 kernel: raid6: avx2x4 gen() 26674 MB/s Sep 4 23:48:33.064138 kernel: raid6: avx2x2 gen() 21905 MB/s Sep 4 23:48:33.145209 kernel: raid6: avx2x1 gen() 17330 MB/s Sep 4 23:48:33.145304 kernel: raid6: using algorithm avx2x4 gen() 26674 MB/s Sep 4 23:48:33.163384 kernel: raid6: .... xor() 7015 MB/s, rmw enabled Sep 4 23:48:33.163472 kernel: raid6: using avx2x2 recovery algorithm Sep 4 23:48:33.188166 kernel: xor: automatically using best checksumming function avx Sep 4 23:48:33.360173 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 23:48:33.376538 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:48:33.408295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:48:33.428500 systemd-udevd[416]: Using default interface naming scheme 'v255'. Sep 4 23:48:33.434462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:48:33.455419 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 23:48:33.472849 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Sep 4 23:48:33.528948 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:48:33.542538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:48:33.616524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:48:33.632396 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 23:48:33.655748 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 23:48:33.675555 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:48:33.677088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:48:33.678424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:48:33.691559 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 23:48:33.690406 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 23:48:33.702751 kernel: AVX2 version of gcm_enc/dec engaged. Sep 4 23:48:33.702784 kernel: AES CTR mode by8 optimization enabled Sep 4 23:48:33.709121 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 23:48:33.712914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:48:33.727981 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 23:48:33.732171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:48:33.732205 kernel: GPT:9289727 != 19775487 Sep 4 23:48:33.732216 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:48:33.732227 kernel: GPT:9289727 != 19775487 Sep 4 23:48:33.732498 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:48:33.734143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:48:33.739892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:48:33.740114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:48:33.745082 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 23:48:33.745574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:48:33.745902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:48:33.752549 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:48:33.758119 kernel: libata version 3.00 loaded. Sep 4 23:48:33.759979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:48:33.774135 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 23:48:33.776715 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 23:48:33.781395 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 4 23:48:33.781642 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 23:48:33.792137 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Sep 4 23:48:33.792173 kernel: scsi host0: ahci Sep 4 23:48:33.793116 kernel: scsi host1: ahci Sep 4 23:48:33.794210 kernel: scsi host2: ahci Sep 4 23:48:33.799185 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (477) Sep 4 23:48:33.799201 kernel: scsi host3: ahci Sep 4 23:48:33.800117 kernel: scsi host4: ahci Sep 4 23:48:33.802191 kernel: scsi host5: ahci Sep 4 23:48:33.802377 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Sep 4 23:48:33.802390 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Sep 4 23:48:33.802407 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Sep 4 23:48:33.802418 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Sep 4 23:48:33.802428 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Sep 4 23:48:33.802438 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Sep 4 23:48:33.811064 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 23:48:33.838193 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:48:33.851209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 23:48:33.875812 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 23:48:33.877303 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 23:48:33.890384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:48:33.908475 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:48:33.912483 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:48:33.946125 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:48:34.079174 disk-uuid[560]: Primary Header is updated. Sep 4 23:48:34.079174 disk-uuid[560]: Secondary Entries is updated. Sep 4 23:48:34.079174 disk-uuid[560]: Secondary Header is updated. 
Sep 4 23:48:34.093505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:48:34.112458 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 23:48:34.112777 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 23:48:34.116208 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 23:48:34.116294 kernel: ata3.00: applying bridge limits Sep 4 23:48:34.117538 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 23:48:34.119186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:48:34.120705 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 23:48:34.120820 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 23:48:34.122207 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 23:48:34.123129 kernel: ata3.00: configured for UDMA/100 Sep 4 23:48:34.129127 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 23:48:34.193509 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 23:48:34.193948 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 23:48:34.215715 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 23:48:35.179181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 23:48:35.179255 disk-uuid[569]: The operation has completed successfully. Sep 4 23:48:35.221566 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:48:35.221705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:48:35.291257 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 23:48:35.311562 sh[593]: Success Sep 4 23:48:35.329141 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 4 23:48:35.371124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 23:48:35.385181 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 23:48:35.387652 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:48:35.439188 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d Sep 4 23:48:35.439295 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:48:35.439308 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:48:35.440506 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:48:35.441484 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:48:35.448174 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:48:35.449148 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:48:35.460273 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:48:35.462573 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:48:35.481777 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:48:35.481855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:48:35.481874 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:48:35.485218 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:48:35.491134 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:48:35.590209 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 4 23:48:35.606273 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:48:35.634843 systemd-networkd[769]: lo: Link UP Sep 4 23:48:35.634854 systemd-networkd[769]: lo: Gained carrier Sep 4 23:48:35.636774 systemd-networkd[769]: Enumeration completed Sep 4 23:48:35.636893 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:48:35.637357 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:48:35.637362 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:48:35.638465 systemd-networkd[769]: eth0: Link UP Sep 4 23:48:35.638470 systemd-networkd[769]: eth0: Gained carrier Sep 4 23:48:35.638478 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:48:35.639253 systemd[1]: Reached target network.target - Network. Sep 4 23:48:35.664153 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:48:35.819361 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 23:48:35.832535 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 23:48:35.938683 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.65 Sep 4 23:48:35.938707 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Sep 4 23:48:36.109746 ignition[774]: Ignition 2.20.0 Sep 4 23:48:36.109759 ignition[774]: Stage: fetch-offline Sep 4 23:48:36.109806 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:36.109818 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:36.109946 ignition[774]: parsed url from cmdline: "" Sep 4 23:48:36.109952 ignition[774]: no config URL provided Sep 4 23:48:36.109958 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:48:36.109970 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:48:36.110003 ignition[774]: op(1): [started] loading QEMU firmware config module Sep 4 23:48:36.110010 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 23:48:36.135379 ignition[774]: op(1): [finished] loading QEMU firmware config module Sep 4 23:48:36.190837 ignition[774]: parsing config with SHA512: 5ad687c11d05d403621787238c97b1d163fd7a339786d9c36193fbafcd8cc187de14c29d8103f42455398986b8537c62cef9a3161de16714d78ab28153e7cd77 Sep 4 23:48:36.202407 unknown[774]: fetched base config from "system" Sep 4 23:48:36.202434 unknown[774]: fetched user config from "qemu" Sep 4 23:48:36.204402 ignition[774]: fetch-offline: fetch-offline passed Sep 4 23:48:36.204583 ignition[774]: Ignition finished successfully Sep 4 23:48:36.209936 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:48:36.212889 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 23:48:36.221886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
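The fetch-offline stage above finds no config URL on the command line, loads qemu_fw_cfg, and reports that it fetched the user config from "qemu"; the SHA512 it logs is the digest of the config it goes on to parse. A minimal sketch of reading such a blob from the guest side, assuming the qemu_fw_cfg driver's usual sysfs layout and the conventional opt/com.coreos/config key (both assumptions about this particular VM's setup, inferred from the log, not confirmed by it):

import hashlib

# Assumed path: qemu_fw_cfg exposes each firmware-config entry under
# /sys/firmware/qemu_fw_cfg/by_name/<key>/raw, and opt/com.coreos/config is
# the key conventionally used to hand an Ignition config to a QEMU guest.
FW_CFG_BLOB = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

def read_guest_config(path: str = FW_CFG_BLOB) -> bytes:
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    config = read_guest_config()
    # Digest comparable to the "parsing config with SHA512: ..." line above.
    print(hashlib.sha512(config).hexdigest())
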
Sep 4 23:48:36.267393 ignition[784]: Ignition 2.20.0 Sep 4 23:48:36.267410 ignition[784]: Stage: kargs Sep 4 23:48:36.267630 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:36.267646 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:36.268940 ignition[784]: kargs: kargs passed Sep 4 23:48:36.269001 ignition[784]: Ignition finished successfully Sep 4 23:48:36.274141 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:48:36.288434 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 23:48:36.326509 ignition[791]: Ignition 2.20.0 Sep 4 23:48:36.326525 ignition[791]: Stage: disks Sep 4 23:48:36.326843 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:36.326861 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:36.329839 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:48:36.327869 ignition[791]: disks: disks passed Sep 4 23:48:36.333344 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:48:36.327922 ignition[791]: Ignition finished successfully Sep 4 23:48:36.335029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 23:48:36.336639 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:48:36.337572 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:48:36.338713 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:48:36.351352 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 23:48:36.388053 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 23:48:36.399706 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:48:36.437027 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 23:48:36.679158 kernel: EXT4-fs (vda9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none. Sep 4 23:48:36.679926 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:48:36.682351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:48:36.693285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:48:36.703709 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:48:36.716695 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (810) Sep 4 23:48:36.716733 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:48:36.716746 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:48:36.716760 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:48:36.707733 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 23:48:36.707805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 23:48:36.707841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:48:36.730723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:48:36.742965 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:48:36.746465 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 4 23:48:36.756414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 23:48:36.838029 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:48:36.868458 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:48:36.874020 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:48:36.878079 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:48:36.989394 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:48:37.005293 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:48:37.020349 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 23:48:37.028767 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:48:37.030815 kernel: BTRFS info (device vda6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:48:37.090656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:48:37.281977 ignition[926]: INFO : Ignition 2.20.0 Sep 4 23:48:37.281977 ignition[926]: INFO : Stage: mount Sep 4 23:48:37.290564 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:37.290564 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:37.290564 ignition[926]: INFO : mount: mount passed Sep 4 23:48:37.290564 ignition[926]: INFO : Ignition finished successfully Sep 4 23:48:37.299571 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:48:37.325327 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:48:37.535444 systemd-networkd[769]: eth0: Gained IPv6LL Sep 4 23:48:37.690469 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:48:37.702130 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (936) Sep 4 23:48:37.704466 kernel: BTRFS info (device vda6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 4 23:48:37.704505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 23:48:37.704520 kernel: BTRFS info (device vda6): using free space tree Sep 4 23:48:37.708137 kernel: BTRFS info (device vda6): auto enabling async discard Sep 4 23:48:37.710321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:48:37.733773 ignition[953]: INFO : Ignition 2.20.0 Sep 4 23:48:37.733773 ignition[953]: INFO : Stage: files Sep 4 23:48:37.735843 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:37.735843 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:37.740797 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Sep 4 23:48:37.743341 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 23:48:37.743341 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 23:48:37.746939 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 23:48:37.746939 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 23:48:37.746939 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 23:48:37.746515 unknown[953]: wrote ssh authorized keys file for user: core Sep 4 23:48:37.753474 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 23:48:37.753474 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 23:48:37.793738 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 23:48:38.843843 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 23:48:38.843843 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:48:38.848457 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 4 23:48:39.321562 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 23:48:40.046448 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 23:48:40.046448 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 4 23:48:40.050680 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 23:48:40.113114 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 23:48:40.119888 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 23:48:40.122023 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 23:48:40.122023 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 4 23:48:40.156134 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 23:48:40.157864 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:48:40.159890 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:48:40.161747 ignition[953]: INFO : files: files passed Sep 4 23:48:40.162618 ignition[953]: INFO : Ignition finished successfully Sep 4 23:48:40.166874 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 23:48:40.179467 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 23:48:40.181703 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 23:48:40.184725 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 4 23:48:40.184851 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 23:48:40.193233 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 23:48:40.196151 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:48:40.196151 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:48:40.199710 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:48:40.203075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:48:40.206074 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 23:48:40.220426 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 23:48:40.247903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 23:48:40.248084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 23:48:40.250749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 23:48:40.252922 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 23:48:40.255191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 23:48:40.265351 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 23:48:40.282853 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:48:40.294266 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 23:48:40.305346 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:48:40.371575 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:48:40.372071 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 23:48:40.439874 ignition[1008]: INFO : Ignition 2.20.0 Sep 4 23:48:40.439874 ignition[1008]: INFO : Stage: umount Sep 4 23:48:40.439874 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:48:40.439874 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 23:48:40.439874 ignition[1008]: INFO : umount: umount passed Sep 4 23:48:40.439874 ignition[1008]: INFO : Ignition finished successfully Sep 4 23:48:40.372432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 23:48:40.372647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:48:40.373741 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 23:48:40.374185 systemd[1]: Stopped target basic.target - Basic System. Sep 4 23:48:40.374481 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 23:48:40.374798 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:48:40.375153 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 23:48:40.375458 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 23:48:40.375775 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:48:40.376185 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 23:48:40.376620 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Sep 4 23:48:40.376920 systemd[1]: Stopped target swap.target - Swaps. Sep 4 23:48:40.377365 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 23:48:40.377485 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:48:40.378036 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:48:40.378369 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:48:40.378651 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 23:48:40.378809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:48:40.379178 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 23:48:40.379293 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 23:48:40.379949 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 23:48:40.380068 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:48:40.380506 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:48:40.380734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:48:40.385173 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:48:40.385605 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 23:48:40.386065 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 23:48:40.386438 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 23:48:40.386562 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 23:48:40.387027 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 23:48:40.387177 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 23:48:40.387578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 23:48:40.387718 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:48:40.388155 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 23:48:40.388318 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 23:48:40.389740 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 23:48:40.390007 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 23:48:40.390193 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:48:40.391480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 23:48:40.391853 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 23:48:40.392028 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:48:40.392495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 23:48:40.392649 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:48:40.399374 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 23:48:40.399617 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 23:48:40.418393 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 23:48:40.418530 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 23:48:40.425645 systemd[1]: Stopped target network.target - Network. Sep 4 23:48:40.425982 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 23:48:40.426051 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Sep 4 23:48:40.426343 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 23:48:40.426395 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 23:48:40.426662 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 23:48:40.426713 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 23:48:40.426979 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 23:48:40.427031 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 23:48:40.427610 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 23:48:40.428127 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 23:48:40.436601 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 23:48:40.436775 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 23:48:40.442411 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 23:48:40.442729 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 23:48:40.442786 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:48:40.447393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:48:40.447738 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 23:48:40.447882 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 23:48:40.450640 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 23:48:40.451862 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 23:48:40.451929 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:48:40.461268 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 23:48:40.462677 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 23:48:40.462772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:48:40.464949 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:48:40.465020 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:48:40.467406 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 23:48:40.467478 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 23:48:40.469738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:48:40.473434 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 23:48:40.473561 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 23:48:40.474500 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:48:40.474651 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 23:48:40.477706 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 23:48:40.477782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 23:48:40.496594 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 23:48:40.496821 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:48:40.499244 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 23:48:40.499414 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 4 23:48:40.502384 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 23:48:40.502465 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 23:48:40.502987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 23:48:40.503041 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:48:40.503325 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 23:48:40.503399 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 23:48:40.504155 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 23:48:40.504221 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 23:48:40.504858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 23:48:40.504937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:48:40.516521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 23:48:40.517811 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 23:48:40.588277 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Sep 4 23:48:40.517898 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:48:40.520839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:48:40.520913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:48:40.523961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 23:48:40.524032 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:48:40.525997 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 23:48:40.526147 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 23:48:40.528210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 23:48:40.530834 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 23:48:40.545586 systemd[1]: Switching root. Sep 4 23:48:40.602186 systemd-journald[194]: Journal stopped Sep 4 23:48:47.987147 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 23:48:47.987239 kernel: SELinux: policy capability open_perms=1 Sep 4 23:48:47.987257 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 23:48:47.987280 kernel: SELinux: policy capability always_check_network=0 Sep 4 23:48:47.987297 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 23:48:47.987324 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 23:48:47.987340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 23:48:47.987357 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 23:48:47.987374 kernel: audit: type=1403 audit(1757029725.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 23:48:47.987398 systemd[1]: Successfully loaded SELinux policy in 88.305ms. Sep 4 23:48:47.987425 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.653ms. 
Sep 4 23:48:47.987445 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 23:48:47.987465 systemd[1]: Detected virtualization kvm. Sep 4 23:48:47.987482 systemd[1]: Detected architecture x86-64. Sep 4 23:48:47.987503 systemd[1]: Detected first boot. Sep 4 23:48:47.987520 systemd[1]: Initializing machine ID from VM UUID. Sep 4 23:48:47.987543 zram_generator::config[1055]: No configuration found. Sep 4 23:48:47.987562 kernel: Guest personality initialized and is inactive Sep 4 23:48:47.987578 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 23:48:47.987595 kernel: Initialized host personality Sep 4 23:48:47.987611 kernel: NET: Registered PF_VSOCK protocol family Sep 4 23:48:47.987628 systemd[1]: Populated /etc with preset unit settings. Sep 4 23:48:47.987650 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 23:48:47.987668 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 23:48:47.987685 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 23:48:47.987703 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 23:48:47.987731 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 23:48:47.987749 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 23:48:47.987768 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 23:48:47.987792 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 23:48:47.987814 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 23:48:47.987831 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 23:48:47.987849 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 23:48:47.987869 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 23:48:47.987886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:48:47.987904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:48:47.987922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 23:48:47.987939 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 23:48:47.987956 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 23:48:47.987978 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 23:48:47.987996 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 23:48:47.988014 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:48:47.988031 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 23:48:47.988049 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 23:48:47.988066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
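The systemd 256.8 banner above encodes build-time features as a +/- flag list. A tiny sketch, illustrative only, that splits such a string into enabled and disabled sets; the sample string is abbreviated from the banner above:

    # Abbreviated copy of the feature string logged by systemd 256.8 above.
    FEATURES = '+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GNUTLS +OPENSSL -ACL'

    def split_features(flag_string):
        """Split a systemd '+FOO -BAR' feature string into (enabled, disabled) sets."""
        enabled, disabled = set(), set()
        for token in flag_string.split():
            (enabled if token.startswith('+') else disabled).add(token[1:])
        return enabled, disabled

    if __name__ == '__main__':
        on, off = split_features(FEATURES)
        print('enabled: ', sorted(on))
        print('disabled:', sorted(off))
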
Sep 4 23:48:47.988084 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 23:48:47.988116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:48:47.988140 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 23:48:47.988157 systemd[1]: Reached target slices.target - Slice Units. Sep 4 23:48:47.988174 systemd[1]: Reached target swap.target - Swaps. Sep 4 23:48:47.988192 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 23:48:47.988209 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 23:48:47.988227 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 23:48:47.988244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 23:48:47.988268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 23:48:47.988285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 23:48:47.988307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 23:48:47.988324 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 23:48:47.988342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 23:48:47.988359 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 23:48:47.988377 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:47.988395 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 23:48:47.988412 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 23:48:47.988429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 23:48:47.988447 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 23:48:47.988469 systemd[1]: Reached target machines.target - Containers. Sep 4 23:48:47.988486 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 23:48:47.988504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:48:47.988522 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 23:48:47.988539 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 23:48:47.988557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:48:47.988575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:48:47.988592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:48:47.988613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 23:48:47.988631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:48:47.988652 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 23:48:47.988669 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 23:48:47.988687 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 23:48:47.988704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Sep 4 23:48:47.988733 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 23:48:47.988752 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:48:47.988773 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 23:48:47.988791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 23:48:47.988834 systemd-journald[1119]: Collecting audit messages is disabled. Sep 4 23:48:47.988870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 23:48:47.988891 systemd-journald[1119]: Journal started Sep 4 23:48:47.988922 systemd-journald[1119]: Runtime Journal (/run/log/journal/bcccfdac1987496fa2073a510f2a29bd) is 6M, max 48.4M, 42.3M free. Sep 4 23:48:47.050675 systemd[1]: Queued start job for default target multi-user.target. Sep 4 23:48:47.069626 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 23:48:47.070234 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 23:48:47.994316 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 23:48:48.051405 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 23:48:48.051511 kernel: loop: module loaded Sep 4 23:48:48.125348 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 23:48:48.129254 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 23:48:48.129304 systemd[1]: Stopped verity-setup.service. Sep 4 23:48:48.133119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:48.140145 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 23:48:48.143339 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 23:48:48.144238 kernel: ACPI: bus type drm_connector registered Sep 4 23:48:48.144329 kernel: fuse: init (API version 7.39) Sep 4 23:48:48.145621 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 23:48:48.147344 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 23:48:48.148585 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 23:48:48.150149 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 23:48:48.151552 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 23:48:48.205566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:48:48.207561 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 23:48:48.207864 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 23:48:48.209645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:48:48.210067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 23:48:48.211821 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:48:48.212188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:48:48.214054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:48:48.214415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 4 23:48:48.216378 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 23:48:48.217147 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 23:48:48.219263 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:48:48.219507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:48:48.221797 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 23:48:48.224589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 23:48:48.226555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 23:48:48.228305 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 23:48:48.342167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:48:48.364908 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 23:48:48.377183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 23:48:48.379871 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 23:48:48.381212 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 23:48:48.381266 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:48:48.383768 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 23:48:48.386715 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 23:48:48.419075 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 23:48:48.421279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:48:48.503327 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 23:48:48.506255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 23:48:48.507780 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:48:48.509811 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 23:48:48.567391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:48:48.569139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:48:48.576021 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 23:48:48.580778 systemd-journald[1119]: Time spent on flushing to /var/log/journal/bcccfdac1987496fa2073a510f2a29bd is 66.978ms for 965 entries. Sep 4 23:48:48.580778 systemd-journald[1119]: System Journal (/var/log/journal/bcccfdac1987496fa2073a510f2a29bd) is 8M, max 195.6M, 187.6M free. Sep 4 23:48:49.382482 systemd-journald[1119]: Received client request to flush runtime journal. Sep 4 23:48:49.382612 kernel: loop0: detected capacity change from 0 to 224512 Sep 4 23:48:49.382645 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 23:48:49.383002 kernel: loop1: detected capacity change from 0 to 147912 Sep 4 23:48:48.583326 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Sep 4 23:48:48.587776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 23:48:48.590359 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 23:48:48.592254 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 23:48:48.604053 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 23:48:48.677389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:48:48.918625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 23:48:48.957212 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 23:48:48.960659 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 23:48:48.968379 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 23:48:48.976330 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 23:48:49.184811 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 23:48:49.206456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 23:48:49.337597 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 4 23:48:49.337612 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Sep 4 23:48:49.344237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 23:48:49.384564 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 23:48:49.429161 kernel: loop2: detected capacity change from 0 to 138176 Sep 4 23:48:49.572145 kernel: loop3: detected capacity change from 0 to 224512 Sep 4 23:48:49.651125 kernel: loop4: detected capacity change from 0 to 147912 Sep 4 23:48:49.804128 kernel: loop5: detected capacity change from 0 to 138176 Sep 4 23:48:49.822746 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 23:48:49.823673 (sd-merge)[1200]: Merged extensions into '/usr'. Sep 4 23:48:50.211139 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 23:48:50.211429 systemd[1]: Reloading... Sep 4 23:48:50.351346 zram_generator::config[1228]: No configuration found. Sep 4 23:48:50.533978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:48:50.575496 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 23:48:50.607994 systemd[1]: Reloading finished in 395 ms. Sep 4 23:48:50.626624 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 23:48:50.652597 systemd[1]: Starting ensure-sysext.service... Sep 4 23:48:50.655953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 23:48:50.839904 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 23:48:50.840301 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
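systemd-sysext above merged the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images into /usr. A minimal sketch that lists extension images under the two directories this log mentions (/etc/extensions holds the kubernetes.raw symlink written by the files stage; /opt/extensions holds the image itself) and shows where each symlink resolves; the directory choice is taken from the log entries, not from the sysext search-path documentation:

    from pathlib import Path

    # Directories mentioned in the log above; the symlink in /etc/extensions
    # points into /opt/extensions.
    EXTENSION_DIRS = [Path('/etc/extensions'), Path('/opt/extensions')]

    def list_extension_images():
        """Yield (image path, resolved target or None) for every *.raw found."""
        for base in EXTENSION_DIRS:
            if not base.is_dir():
                continue
            for raw in sorted(base.rglob('*.raw')):
                target = raw.resolve() if raw.is_symlink() else None
                yield raw, target

    if __name__ == '__main__':
        for image, target in list_extension_images():
            print(image, '->', target if target else '(regular file)')
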
Sep 4 23:48:50.841326 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 23:48:50.841622 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Sep 4 23:48:50.841726 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Sep 4 23:48:50.846328 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:48:50.846341 systemd-tmpfiles[1266]: Skipping /boot Sep 4 23:48:50.847453 systemd[1]: Reload requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Sep 4 23:48:50.847479 systemd[1]: Reloading... Sep 4 23:48:50.909655 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 23:48:50.909681 systemd-tmpfiles[1266]: Skipping /boot Sep 4 23:48:50.936147 zram_generator::config[1297]: No configuration found. Sep 4 23:48:51.049199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:48:51.118790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 23:48:51.119209 systemd[1]: Reloading finished in 271 ms. Sep 4 23:48:51.136076 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 23:48:51.167025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 23:48:51.214531 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:48:51.256900 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 23:48:51.260412 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 23:48:51.264345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 23:48:51.268562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 23:48:51.273212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:51.273436 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:48:51.274932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:48:51.345997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:48:51.356485 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:48:51.358190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:48:51.358347 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:48:51.358508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:51.360710 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 23:48:51.384700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:48:51.385051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 23:48:51.388049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:48:51.388401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:48:51.391379 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 23:48:51.394060 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 23:48:51.396914 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:48:51.397247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:48:51.405178 augenrules[1363]: No rules Sep 4 23:48:51.439727 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:48:51.440160 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:48:51.464045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 23:48:51.470594 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 23:48:51.510281 systemd[1]: Finished ensure-sysext.service. Sep 4 23:48:51.513085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:51.524538 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:48:51.542228 augenrules[1378]: /sbin/augenrules: No change Sep 4 23:48:51.545707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 23:48:51.547699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 23:48:51.552322 augenrules[1397]: No rules Sep 4 23:48:51.550722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 23:48:51.553483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 23:48:51.557380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 23:48:51.593820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 23:48:51.593874 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 23:48:51.596472 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 23:48:51.599347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 23:48:51.604265 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 23:48:51.627557 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 23:48:51.628629 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 23:48:51.628663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 23:48:51.629805 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:48:51.630212 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:48:51.651557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 23:48:51.651825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 23:48:51.653546 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 23:48:51.653834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 23:48:51.655410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 23:48:51.655654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 23:48:51.657661 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 23:48:51.657890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 23:48:51.659537 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 23:48:51.672135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 23:48:51.672268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 23:48:51.677731 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Sep 4 23:48:51.701235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 23:48:51.739291 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:48:51.758947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 23:48:51.765771 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 23:48:51.790133 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1427) Sep 4 23:48:51.878678 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 23:48:51.882123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 4 23:48:51.890533 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 23:48:51.891134 kernel: ACPI: button: Power Button [PWRF] Sep 4 23:48:51.916418 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 23:48:51.923134 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 23:48:51.944130 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 23:48:51.947332 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 4 23:48:51.948686 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 23:48:51.978137 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 23:48:51.985469 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 23:48:51.986949 systemd-networkd[1431]: lo: Link UP Sep 4 23:48:51.986968 systemd-networkd[1431]: lo: Gained carrier Sep 4 23:48:51.992847 systemd-networkd[1431]: Enumeration completed Sep 4 23:48:51.993067 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:48:51.996591 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:48:51.996602 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:48:51.997353 systemd-networkd[1431]: eth0: Link UP Sep 4 23:48:51.997358 systemd-networkd[1431]: eth0: Gained carrier Sep 4 23:48:51.997371 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 23:48:52.002892 systemd-resolved[1344]: Positive Trust Anchors: Sep 4 23:48:52.002923 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 23:48:52.002959 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 23:48:52.008808 systemd-resolved[1344]: Defaulting to hostname 'linux'. Sep 4 23:48:52.021910 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 4 23:48:52.033321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 23:48:52.035226 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 23:48:52.062917 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 23:48:52.064803 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Sep 4 23:48:52.926644 systemd-resolved[1344]: Clock change detected. Flushing caches. Sep 4 23:48:52.926708 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 23:48:52.926779 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-09-04 23:48:52.926581 UTC. Sep 4 23:48:52.943062 systemd[1]: Reached target network.target - Network. Sep 4 23:48:52.947307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:48:52.959710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:48:52.989089 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 23:48:52.996266 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 23:48:53.004153 kernel: kvm_amd: TSC scaling supported Sep 4 23:48:53.004297 kernel: kvm_amd: Nested Virtualization enabled Sep 4 23:48:53.004315 kernel: kvm_amd: Nested Paging enabled Sep 4 23:48:53.004331 kernel: kvm_amd: LBR virtualization supported Sep 4 23:48:53.004345 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 23:48:53.005656 kernel: kvm_amd: Virtual GIF supported Sep 4 23:48:53.032089 kernel: EDAC MC: Ver: 3.0.0 Sep 4 23:48:53.073632 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 23:48:53.126794 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 23:48:53.138177 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:48:53.162303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:48:53.196864 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 23:48:53.199815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:48:53.201148 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:48:53.202514 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
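systemd-networkd above reports the DHCPv4 lease for eth0 in a fixed phrase ("DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1"). A minimal sketch, assuming only that phrasing, that extracts the lease with a regex and derives the enclosing network via the standard ipaddress module:

    import ipaddress
    import re

    # Journal line copied from the log above (timestamp trimmed).
    LINE = ('systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.65/16, '
            'gateway 10.0.0.1 acquired from 10.0.0.1')

    LEASE_RE = re.compile(
        r'(?P<iface>\S+): DHCPv4 address (?P<addr>[\d./]+), '
        r'gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)')

    def parse_lease(line):
        """Return the interface, address, network, gateway and DHCP server, or None."""
        m = LEASE_RE.search(line)
        if not m:
            return None
        iface = ipaddress.ip_interface(m.group('addr'))   # 10.0.0.65/16
        return {
            'interface': m.group('iface'),
            'address': iface.ip,
            'network': iface.network,                     # 10.0.0.0/16
            'gateway': ipaddress.ip_address(m.group('gw')),
            'server': ipaddress.ip_address(m.group('server')),
        }

    if __name__ == '__main__':
        print(parse_lease(LINE))
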
Sep 4 23:48:53.204735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 23:48:53.207142 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 23:48:53.209305 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 23:48:53.212360 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 23:48:53.214050 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 23:48:53.214095 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:48:53.215163 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:48:53.217808 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:48:53.221242 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:48:53.226113 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:48:53.227909 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:48:53.229301 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:48:53.237525 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:48:53.239489 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:48:53.242672 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 23:48:53.244681 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:48:53.246683 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:48:53.247762 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:48:53.248974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:48:53.249016 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:48:53.250933 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:48:53.255126 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:48:53.257209 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 23:48:53.259292 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:48:53.265314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 23:48:53.266568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:48:53.268934 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:48:53.273286 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:48:53.278265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:48:53.281301 jq[1476]: false Sep 4 23:48:53.283887 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:48:53.290286 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:48:53.293197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 4 23:48:53.294058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:48:53.306240 dbus-daemon[1475]: [system] SELinux support is enabled Sep 4 23:48:53.308246 extend-filesystems[1477]: Found loop3 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found loop4 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found loop5 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found sr0 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda1 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda2 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda3 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found usr Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda4 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda6 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda7 Sep 4 23:48:53.308246 extend-filesystems[1477]: Found vda9 Sep 4 23:48:53.308246 extend-filesystems[1477]: Checking size of /dev/vda9 Sep 4 23:48:53.306483 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:48:53.311987 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:48:53.339629 update_engine[1489]: I20250904 23:48:53.320622 1489 main.cc:92] Flatcar Update Engine starting Sep 4 23:48:53.339629 update_engine[1489]: I20250904 23:48:53.321866 1489 update_check_scheduler.cc:74] Next update check in 6m33s Sep 4 23:48:53.315346 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:48:53.320608 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 23:48:53.340333 jq[1494]: true Sep 4 23:48:53.334490 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:48:53.334935 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:48:53.335484 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:48:53.335843 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 23:48:53.342308 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:48:53.342794 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 23:48:53.362928 extend-filesystems[1477]: Resized partition /dev/vda9 Sep 4 23:48:53.365664 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:48:53.370746 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:48:53.486801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1429) Sep 4 23:48:53.486870 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:48:53.487001 jq[1498]: true Sep 4 23:48:53.487727 tar[1497]: linux-amd64/LICENSE Sep 4 23:48:53.487970 tar[1497]: linux-amd64/helm Sep 4 23:48:53.508395 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:48:53.520215 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:48:53.529216 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 4 23:48:53.530281 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:48:53.530317 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:48:53.531656 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:48:53.531687 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:48:53.534659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:48:53.538315 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:48:53.538747 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:48:53.672628 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:48:53.694939 systemd-logind[1485]: Watching system buttons on /dev/input/event1 (Power Button) Sep 4 23:48:53.699113 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 23:48:53.699242 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 23:48:53.702228 systemd-logind[1485]: New seat seat0. Sep 4 23:48:53.704162 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:48:53.946605 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:48:53.977294 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:48:53.989473 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:48:53.992650 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:48:53.993928 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 23:48:53.996949 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:45680.service - OpenSSH per-connection server daemon (10.0.0.1:45680). Sep 4 23:48:54.256536 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 23:48:54.358857 systemd-networkd[1431]: eth0: Gained IPv6LL Sep 4 23:48:54.364070 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 23:48:54.367023 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:48:54.369249 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:48:54.378793 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 23:48:54.383738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:48:54.387629 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:48:54.414136 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 23:48:54.414431 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 23:48:54.416093 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:48:54.567934 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:48:54.604996 extend-filesystems[1508]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 23:48:54.604996 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:48:54.604996 extend-filesystems[1508]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
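The resize above grows the ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks at a 4 KiB block size. A small sketch just to make the arithmetic concrete; the numbers are copied from the kernel and resize2fs lines above:

    BLOCK_SIZE = 4096          # "(4k) blocks", per the resize2fs output above
    OLD_BLOCKS = 553_472       # filesystem size before the online resize
    NEW_BLOCKS = 1_864_699     # size reported by "resized filesystem to 1864699"

    def blocks_to_gib(blocks, block_size=BLOCK_SIZE):
        """Convert a block count to GiB."""
        return blocks * block_size / 2**30

    if __name__ == '__main__':
        print(f'before: {blocks_to_gib(OLD_BLOCKS):.2f} GiB')   # ~2.11 GiB
        print(f'after:  {blocks_to_gib(NEW_BLOCKS):.2f} GiB')   # ~7.11 GiB
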
Sep 4 23:48:54.612642 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Sep 4 23:48:54.607141 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:48:54.607527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:48:54.984638 bash[1542]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:48:54.985052 containerd[1500]: time="2025-09-04T23:48:54.984671478Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:48:54.988345 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:48:55.000983 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 23:48:55.001612 sshd[1547]: Connection closed by authenticating user core 10.0.0.1 port 45680 [preauth] Sep 4 23:48:55.005187 systemd[1]: sshd@0-10.0.0.65:22-10.0.0.1:45680.service: Deactivated successfully. Sep 4 23:48:55.020667 containerd[1500]: time="2025-09-04T23:48:55.020286796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.022887863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.022923741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.022943377Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023181865Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023201501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023281231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023300537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023649942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023666734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023681521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024054 containerd[1500]: time="2025-09-04T23:48:55.023693033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024394 containerd[1500]: time="2025-09-04T23:48:55.023799773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024394 containerd[1500]: time="2025-09-04T23:48:55.024113311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024394 containerd[1500]: time="2025-09-04T23:48:55.024295443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:48:55.024394 containerd[1500]: time="2025-09-04T23:48:55.024311513Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:48:55.024685 containerd[1500]: time="2025-09-04T23:48:55.024650448Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:48:55.024878 containerd[1500]: time="2025-09-04T23:48:55.024855142Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:48:55.155231 containerd[1500]: time="2025-09-04T23:48:55.155145737Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:48:55.155382 containerd[1500]: time="2025-09-04T23:48:55.155274759Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:48:55.155382 containerd[1500]: time="2025-09-04T23:48:55.155306759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:48:55.155382 containerd[1500]: time="2025-09-04T23:48:55.155328189Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:48:55.155382 containerd[1500]: time="2025-09-04T23:48:55.155350260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:48:55.155761 containerd[1500]: time="2025-09-04T23:48:55.155732808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:48:55.156175 containerd[1500]: time="2025-09-04T23:48:55.156152424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 23:48:55.156367 containerd[1500]: time="2025-09-04T23:48:55.156308788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:48:55.156367 containerd[1500]: time="2025-09-04T23:48:55.156351177Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:48:55.156367 containerd[1500]: time="2025-09-04T23:48:55.156374671Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:48:55.156367 containerd[1500]: time="2025-09-04T23:48:55.156389589Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156407292Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156429704Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156450343Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156468547Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156485128Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156513952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156530533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156557744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156574987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156592449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156613389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156629579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156645379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.156684 containerd[1500]: time="2025-09-04T23:48:55.156658684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156675525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156691014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156716071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156731600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156744605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156761036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156784540Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156814045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156828652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156840775Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156919312Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156943678Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:48:55.157375 containerd[1500]: time="2025-09-04T23:48:55.156956392Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 23:48:55.157801 containerd[1500]: time="2025-09-04T23:48:55.157087117Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:48:55.157801 containerd[1500]: time="2025-09-04T23:48:55.157106143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:48:55.157801 containerd[1500]: time="2025-09-04T23:48:55.157125238Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:48:55.157801 containerd[1500]: time="2025-09-04T23:48:55.157141549Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:48:55.157801 containerd[1500]: time="2025-09-04T23:48:55.157154163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 23:48:55.157962 containerd[1500]: time="2025-09-04T23:48:55.157636417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:48:55.157962 containerd[1500]: time="2025-09-04T23:48:55.157695307Z" level=info msg="Connect containerd service" Sep 4 23:48:55.157962 containerd[1500]: time="2025-09-04T23:48:55.157739560Z" level=info msg="using legacy CRI server" Sep 4 23:48:55.157962 containerd[1500]: time="2025-09-04T23:48:55.157748978Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:48:55.157962 containerd[1500]: time="2025-09-04T23:48:55.157939165Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:48:55.158858 containerd[1500]: time="2025-09-04T23:48:55.158798846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:48:55.159370 
containerd[1500]: time="2025-09-04T23:48:55.159275650Z" level=info msg="Start subscribing containerd event" Sep 4 23:48:55.159453 containerd[1500]: time="2025-09-04T23:48:55.159392379Z" level=info msg="Start recovering state" Sep 4 23:48:55.159505 containerd[1500]: time="2025-09-04T23:48:55.159453414Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:48:55.159540 containerd[1500]: time="2025-09-04T23:48:55.159486205Z" level=info msg="Start event monitor" Sep 4 23:48:55.159572 containerd[1500]: time="2025-09-04T23:48:55.159561396Z" level=info msg="Start snapshots syncer" Sep 4 23:48:55.159614 containerd[1500]: time="2025-09-04T23:48:55.159575893Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:48:55.159614 containerd[1500]: time="2025-09-04T23:48:55.159597774Z" level=info msg="Start streaming server" Sep 4 23:48:55.159946 containerd[1500]: time="2025-09-04T23:48:55.159579721Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:48:55.159946 containerd[1500]: time="2025-09-04T23:48:55.159801406Z" level=info msg="containerd successfully booted in 0.177103s" Sep 4 23:48:55.163792 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 23:48:55.305708 tar[1497]: linux-amd64/README.md Sep 4 23:48:55.322654 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:48:56.591276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:48:56.593217 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:48:56.594591 systemd[1]: Startup finished in 1.396s (kernel) + 13.562s (initrd) + 10.214s (userspace) = 25.172s. Sep 4 23:48:56.624563 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:48:58.221426 kubelet[1595]: E0904 23:48:58.221349 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:48:58.225993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:48:58.226230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:48:58.226725 systemd[1]: kubelet.service: Consumed 2.041s CPU time, 265.2M memory peak. Sep 4 23:49:05.025570 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:41708.service - OpenSSH per-connection server daemon (10.0.0.1:41708). Sep 4 23:49:05.065807 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 41708 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:05.067869 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.075415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:49:05.084669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:49:05.091653 systemd-logind[1485]: New session 1 of user core. Sep 4 23:49:05.098914 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:49:05.114366 systemd[1]: Starting user@500.service - User Manager for UID 500... 
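The kubelet error above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") recurs throughout this boot: the unit is enabled before any configuration has been written, so systemd keeps restarting it (the restart counter climbs in later lines) until the file exists, which in a kubeadm-style setup normally happens during init/join. The sketch below only shows the general shape of the file the kubelet is looking for; the field values are illustrative assumptions, not the configuration this node eventually received.

```go
// Sketch: write a minimal KubeletConfiguration of the kind whose absence
// causes the restart loop above. Values are illustrative; on this node the
// real file is expected to be generated later by kubeadm.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches "CgroupDriver":"systemd" later in this log
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```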
Sep 4 23:49:05.117798 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:49:05.120979 systemd-logind[1485]: New session c1 of user core. Sep 4 23:49:05.293224 systemd[1612]: Queued start job for default target default.target. Sep 4 23:49:05.305990 systemd[1612]: Created slice app.slice - User Application Slice. Sep 4 23:49:05.306023 systemd[1612]: Reached target paths.target - Paths. Sep 4 23:49:05.306091 systemd[1612]: Reached target timers.target - Timers. Sep 4 23:49:05.308141 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:49:05.320489 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:49:05.320634 systemd[1612]: Reached target sockets.target - Sockets. Sep 4 23:49:05.320682 systemd[1612]: Reached target basic.target - Basic System. Sep 4 23:49:05.320729 systemd[1612]: Reached target default.target - Main User Target. Sep 4 23:49:05.320764 systemd[1612]: Startup finished in 190ms. Sep 4 23:49:05.321195 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:49:05.323216 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:49:05.401524 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:41714.service - OpenSSH per-connection server daemon (10.0.0.1:41714). Sep 4 23:49:05.441901 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 41714 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:05.443577 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.448167 systemd-logind[1485]: New session 2 of user core. Sep 4 23:49:05.458200 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:49:05.514174 sshd[1625]: Connection closed by 10.0.0.1 port 41714 Sep 4 23:49:05.514686 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:05.525123 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:41714.service: Deactivated successfully. Sep 4 23:49:05.527669 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:49:05.529627 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:49:05.537765 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:41724.service - OpenSSH per-connection server daemon (10.0.0.1:41724). Sep 4 23:49:05.539172 systemd-logind[1485]: Removed session 2. Sep 4 23:49:05.575761 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 41724 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:05.577319 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.582060 systemd-logind[1485]: New session 3 of user core. Sep 4 23:49:05.595243 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:49:05.646912 sshd[1633]: Connection closed by 10.0.0.1 port 41724 Sep 4 23:49:05.647355 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:05.660087 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:41724.service: Deactivated successfully. Sep 4 23:49:05.662141 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:49:05.664128 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:49:05.672307 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:41736.service - OpenSSH per-connection server daemon (10.0.0.1:41736). Sep 4 23:49:05.673495 systemd-logind[1485]: Removed session 3. 
Sep 4 23:49:05.715853 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 41736 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:05.717875 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.724091 systemd-logind[1485]: New session 4 of user core. Sep 4 23:49:05.738376 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:49:05.795354 sshd[1641]: Connection closed by 10.0.0.1 port 41736 Sep 4 23:49:05.795787 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:05.820794 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:41736.service: Deactivated successfully. Sep 4 23:49:05.823212 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:49:05.824018 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:49:05.833473 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:41752.service - OpenSSH per-connection server daemon (10.0.0.1:41752). Sep 4 23:49:05.834777 systemd-logind[1485]: Removed session 4. Sep 4 23:49:05.872890 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 41752 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:05.875129 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:05.880576 systemd-logind[1485]: New session 5 of user core. Sep 4 23:49:05.898472 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 23:49:05.959330 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:49:05.959676 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:49:05.976526 sudo[1650]: pam_unix(sudo:session): session closed for user root Sep 4 23:49:05.978359 sshd[1649]: Connection closed by 10.0.0.1 port 41752 Sep 4 23:49:05.978901 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:05.994875 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:41752.service: Deactivated successfully. Sep 4 23:49:05.996664 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:49:05.998586 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:49:06.012351 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:41766.service - OpenSSH per-connection server daemon (10.0.0.1:41766). Sep 4 23:49:06.013728 systemd-logind[1485]: Removed session 5. Sep 4 23:49:06.056630 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 41766 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:06.058671 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:06.064626 systemd-logind[1485]: New session 6 of user core. Sep 4 23:49:06.078387 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:49:06.135142 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:49:06.135496 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:49:06.140504 sudo[1660]: pam_unix(sudo:session): session closed for user root Sep 4 23:49:06.149145 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:49:06.149610 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:49:06.169434 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
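Sessions 2 through 6 above all follow the same pattern: a client at 10.0.0.1 opens an SSH connection as core, runs a single privileged command via sudo (setenforce 1, removing the audit rule files, restarting audit-rules), and disconnects. A sketch of what such a one-command-per-connection driver can look like, assuming golang.org/x/crypto/ssh and a usable private key for the core user; the address and the commands are copied from the log, the rest is assumed.

```go
// Sketch of a one-command-per-connection SSH driver like the one visible in
// the short-lived sessions above (assumes golang.org/x/crypto/ssh).
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func run(cfg *ssh.ClientConfig, addr, cmd string) {
	client, err := ssh.Dial("tcp", addr, cfg) // one TCP connection per command,
	if err != nil {                           // matching the sessions in the log
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(cmd)
	log.Printf("%s: %v\n%s", cmd, err, out)
}

func main() {
	key, err := os.ReadFile("/path/to/core_key") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a lab VM only
	}
	// Commands mirror the sudo invocations recorded above.
	for _, cmd := range []string{
		"sudo setenforce 1",
		"sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules",
		"sudo systemctl restart audit-rules",
	} {
		run(cfg, "10.0.0.65:22", cmd)
	}
}
```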
Sep 4 23:49:06.205309 augenrules[1682]: No rules Sep 4 23:49:06.207797 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:49:06.208191 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 23:49:06.209683 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 4 23:49:06.211449 sshd[1658]: Connection closed by 10.0.0.1 port 41766 Sep 4 23:49:06.211873 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 4 23:49:06.224054 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:41766.service: Deactivated successfully. Sep 4 23:49:06.226298 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:49:06.227926 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:49:06.238296 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:41772.service - OpenSSH per-connection server daemon (10.0.0.1:41772). Sep 4 23:49:06.239555 systemd-logind[1485]: Removed session 6. Sep 4 23:49:06.275694 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 41772 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:49:06.277354 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:49:06.282090 systemd-logind[1485]: New session 7 of user core. Sep 4 23:49:06.293176 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:49:06.348155 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:49:06.348520 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:49:07.177423 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:49:07.177579 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:49:08.165441 dockerd[1714]: time="2025-09-04T23:49:08.165345278Z" level=info msg="Starting up" Sep 4 23:49:08.268709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:49:08.279503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:08.752106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:08.756756 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:49:09.100195 kubelet[1744]: E0904 23:49:09.099972 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:49:09.109576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:49:09.109856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:49:09.110363 systemd[1]: kubelet.service: Consumed 406ms CPU time, 112.5M memory peak. Sep 4 23:49:11.814608 dockerd[1714]: time="2025-09-04T23:49:11.814495203Z" level=info msg="Loading containers: start." Sep 4 23:49:12.371107 kernel: Initializing XFRM netlink socket Sep 4 23:49:12.494169 systemd-networkd[1431]: docker0: Link UP Sep 4 23:49:12.548416 dockerd[1714]: time="2025-09-04T23:49:12.548355931Z" level=info msg="Loading containers: done." 
Sep 4 23:49:12.568095 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1245409512-merged.mount: Deactivated successfully. Sep 4 23:49:12.591073 dockerd[1714]: time="2025-09-04T23:49:12.590971746Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:49:12.591276 dockerd[1714]: time="2025-09-04T23:49:12.591160029Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:49:12.591391 dockerd[1714]: time="2025-09-04T23:49:12.591357670Z" level=info msg="Daemon has completed initialization" Sep 4 23:49:12.863653 dockerd[1714]: time="2025-09-04T23:49:12.863519483Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:49:12.864288 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:49:14.297096 containerd[1500]: time="2025-09-04T23:49:14.296986193Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 23:49:15.034841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2982911732.mount: Deactivated successfully. Sep 4 23:49:19.268894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:49:19.283283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:19.472096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:19.477134 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:49:19.683484 kubelet[1952]: E0904 23:49:19.683291 1952 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:49:19.688664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:49:19.688957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:49:19.689417 systemd[1]: kubelet.service: Consumed 241ms CPU time, 110.6M memory peak. 
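Earlier in this line the Docker daemon finishes initialization and reports "API listen on /run/docker.sock", so it can be probed over that Unix socket. A quick liveness check against Docker's documented /_ping endpoint, using only the Go standard library; the socket path is from the log, the probe itself is just an illustration.

```go
// Probe the Docker daemon over /run/docker.sock via the /_ping endpoint.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Route all HTTP traffic for this client through the Unix socket the
	// daemon announced in the log.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}

	resp, err := client.Get("http://unix/_ping") // host part is ignored; the dialer decides
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body) // a healthy daemon answers 200 with body "OK"
}
```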
Sep 4 23:49:23.696996 containerd[1500]: time="2025-09-04T23:49:23.696905178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:23.755092 containerd[1500]: time="2025-09-04T23:49:23.754930376Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 4 23:49:23.795771 containerd[1500]: time="2025-09-04T23:49:23.795666225Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:23.808117 containerd[1500]: time="2025-09-04T23:49:23.807996074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:23.809624 containerd[1500]: time="2025-09-04T23:49:23.809545920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 9.512452506s" Sep 4 23:49:23.809624 containerd[1500]: time="2025-09-04T23:49:23.809624567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 23:49:23.810805 containerd[1500]: time="2025-09-04T23:49:23.810751169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 23:49:27.046357 containerd[1500]: time="2025-09-04T23:49:27.046226853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:27.047142 containerd[1500]: time="2025-09-04T23:49:27.046970157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 4 23:49:27.048652 containerd[1500]: time="2025-09-04T23:49:27.048598794Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:27.054071 containerd[1500]: time="2025-09-04T23:49:27.053994323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:27.055974 containerd[1500]: time="2025-09-04T23:49:27.055872650Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 3.245056528s" Sep 4 23:49:27.056143 containerd[1500]: time="2025-09-04T23:49:27.056007849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 23:49:27.057482 containerd[1500]: 
time="2025-09-04T23:49:27.057406817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 23:49:29.557917 containerd[1500]: time="2025-09-04T23:49:29.557776976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.561533 containerd[1500]: time="2025-09-04T23:49:29.561414873Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 4 23:49:29.564983 containerd[1500]: time="2025-09-04T23:49:29.564827760Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.574744 containerd[1500]: time="2025-09-04T23:49:29.574664665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:29.575912 containerd[1500]: time="2025-09-04T23:49:29.575858505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.518399709s" Sep 4 23:49:29.575912 containerd[1500]: time="2025-09-04T23:49:29.575909172Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 23:49:29.576666 containerd[1500]: time="2025-09-04T23:49:29.576622624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 23:49:29.769322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:49:29.785400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:30.087087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:30.092107 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:49:30.479519 kubelet[2016]: E0904 23:49:30.479316 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:49:30.485157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:49:30.485396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:49:30.485834 systemd[1]: kubelet.service: Consumed 362ms CPU time, 112.5M memory peak. Sep 4 23:49:32.366901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494487155.mount: Deactivated successfully. 
Sep 4 23:49:33.517432 containerd[1500]: time="2025-09-04T23:49:33.517322884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:33.518561 containerd[1500]: time="2025-09-04T23:49:33.518437664Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 4 23:49:33.520513 containerd[1500]: time="2025-09-04T23:49:33.520438999Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:33.530348 containerd[1500]: time="2025-09-04T23:49:33.530174614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:33.531160 containerd[1500]: time="2025-09-04T23:49:33.531120844Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 3.954444448s" Sep 4 23:49:33.531225 containerd[1500]: time="2025-09-04T23:49:33.531163565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 23:49:33.531754 containerd[1500]: time="2025-09-04T23:49:33.531724041Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 23:49:34.147262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362329274.mount: Deactivated successfully. 
Sep 4 23:49:36.030660 containerd[1500]: time="2025-09-04T23:49:36.030554824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.032153 containerd[1500]: time="2025-09-04T23:49:36.032025224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 23:49:36.033624 containerd[1500]: time="2025-09-04T23:49:36.033577639Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.037812 containerd[1500]: time="2025-09-04T23:49:36.037706691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.039239 containerd[1500]: time="2025-09-04T23:49:36.039175869Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.507415148s" Sep 4 23:49:36.039239 containerd[1500]: time="2025-09-04T23:49:36.039220875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 23:49:36.039894 containerd[1500]: time="2025-09-04T23:49:36.039844988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:49:36.579208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422382038.mount: Deactivated successfully. 
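The reported sizes and durations give a rough idea of effective pull throughput: kube-proxy (30,896,189 bytes in about 3.95 s) works out to roughly 7.8 MB/s and coredns (18,562,039 bytes in about 2.51 s) to roughly 7.4 MB/s. A trivial calculation, with the numbers copied from the log; this ignores unpack time and registry round-trips, so it is only an estimate.

```go
// Rough pull-throughput estimate from the sizes and durations in the log.
package main

import "fmt"

func main() {
	pulls := []struct {
		name    string
		bytes   float64 // size reported in the "Pulled image" message
		seconds float64 // duration reported in the same message
	}{
		{"kube-proxy:v1.32.8", 30896189, 3.954444448},
		{"coredns:v1.11.3", 18562039, 2.507415148},
	}
	for _, p := range pulls {
		fmt.Printf("%-22s %.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
	}
}
```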
Sep 4 23:49:36.588274 containerd[1500]: time="2025-09-04T23:49:36.588199909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.589302 containerd[1500]: time="2025-09-04T23:49:36.589203513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 23:49:36.590740 containerd[1500]: time="2025-09-04T23:49:36.590664584Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.594076 containerd[1500]: time="2025-09-04T23:49:36.594014019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:36.594831 containerd[1500]: time="2025-09-04T23:49:36.594787526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 554.896871ms" Sep 4 23:49:36.594831 containerd[1500]: time="2025-09-04T23:49:36.594828253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 23:49:36.595558 containerd[1500]: time="2025-09-04T23:49:36.595511930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 23:49:37.326024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838015975.mount: Deactivated successfully. Sep 4 23:49:38.444365 update_engine[1489]: I20250904 23:49:38.443972 1489 update_attempter.cc:509] Updating boot flags... Sep 4 23:49:38.488499 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2136) Sep 4 23:49:38.572096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2130) Sep 4 23:49:40.518782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 23:49:40.547393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:40.789351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:49:40.794118 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:49:41.436079 kubelet[2167]: E0904 23:49:41.435978 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:49:41.437596 containerd[1500]: time="2025-09-04T23:49:41.437520250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:41.439145 containerd[1500]: time="2025-09-04T23:49:41.439073998Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 4 23:49:41.441173 containerd[1500]: time="2025-09-04T23:49:41.441109457Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:41.442169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:49:41.442469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:49:41.443246 systemd[1]: kubelet.service: Consumed 369ms CPU time, 109.5M memory peak. Sep 4 23:49:41.445153 containerd[1500]: time="2025-09-04T23:49:41.445117686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:49:41.446804 containerd[1500]: time="2025-09-04T23:49:41.446775551Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.851216562s" Sep 4 23:49:41.446865 containerd[1500]: time="2025-09-04T23:49:41.446805067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 23:49:44.298292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:44.298524 systemd[1]: kubelet.service: Consumed 369ms CPU time, 109.5M memory peak. Sep 4 23:49:44.312540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:44.347193 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)... Sep 4 23:49:44.347217 systemd[1]: Reloading... Sep 4 23:49:44.480879 zram_generator::config[2257]: No configuration found. Sep 4 23:49:44.837633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:49:44.971203 systemd[1]: Reloading finished in 623 ms. Sep 4 23:49:45.053016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:49:45.058630 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:49:45.063175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:45.065592 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:49:45.066086 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:45.066172 systemd[1]: kubelet.service: Consumed 189ms CPU time, 100.1M memory peak. Sep 4 23:49:45.080437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:49:45.287798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:49:45.295911 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:49:45.381956 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:49:45.381956 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:49:45.381956 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:49:45.382424 kubelet[2303]: I0904 23:49:45.382016 2303 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:49:45.659358 kubelet[2303]: I0904 23:49:45.659207 2303 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:49:45.659358 kubelet[2303]: I0904 23:49:45.659245 2303 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:49:45.659717 kubelet[2303]: I0904 23:49:45.659560 2303 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:49:45.700945 kubelet[2303]: E0904 23:49:45.700855 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:45.702589 kubelet[2303]: I0904 23:49:45.702550 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:49:45.726393 kubelet[2303]: E0904 23:49:45.726338 2303 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:49:45.726393 kubelet[2303]: I0904 23:49:45.726382 2303 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:49:45.732420 kubelet[2303]: I0904 23:49:45.732370 2303 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:49:45.732868 kubelet[2303]: I0904 23:49:45.732823 2303 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:49:45.733157 kubelet[2303]: I0904 23:49:45.732870 2303 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:49:45.733287 kubelet[2303]: I0904 23:49:45.733185 2303 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:49:45.733287 kubelet[2303]: I0904 23:49:45.733197 2303 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:49:45.733436 kubelet[2303]: I0904 23:49:45.733418 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:49:45.774396 kubelet[2303]: I0904 23:49:45.774271 2303 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:49:45.774396 kubelet[2303]: I0904 23:49:45.774353 2303 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:49:45.774396 kubelet[2303]: I0904 23:49:45.774405 2303 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:49:45.774396 kubelet[2303]: I0904 23:49:45.774442 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:49:45.786974 kubelet[2303]: I0904 23:49:45.786909 2303 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:49:45.787481 kubelet[2303]: I0904 23:49:45.787425 2303 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:49:45.821286 kubelet[2303]: W0904 23:49:45.814722 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 4 23:49:45.822227 kubelet[2303]: W0904 23:49:45.822149 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:45.822300 kubelet[2303]: E0904 23:49:45.822236 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:45.822726 kubelet[2303]: W0904 23:49:45.822637 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:45.822762 kubelet[2303]: E0904 23:49:45.822725 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:45.833318 kubelet[2303]: I0904 23:49:45.833247 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:49:45.833415 kubelet[2303]: I0904 23:49:45.833359 2303 server.go:1287] "Started kubelet" Sep 4 23:49:45.833524 kubelet[2303]: I0904 23:49:45.833473 2303 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:49:45.834954 kubelet[2303]: I0904 23:49:45.834916 2303 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:49:45.836006 kubelet[2303]: I0904 23:49:45.835958 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:49:45.841899 kubelet[2303]: I0904 23:49:45.841764 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:49:45.842256 kubelet[2303]: I0904 23:49:45.842220 2303 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:49:45.846075 kubelet[2303]: I0904 23:49:45.844749 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:49:45.846338 kubelet[2303]: I0904 23:49:45.846304 2303 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:49:45.846642 kubelet[2303]: E0904 23:49:45.846603 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:45.847774 kubelet[2303]: I0904 23:49:45.847713 2303 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:49:45.847967 kubelet[2303]: I0904 23:49:45.847918 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:49:45.848886 kubelet[2303]: E0904 23:49:45.848626 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection 
refused" interval="200ms" Sep 4 23:49:45.848886 kubelet[2303]: I0904 23:49:45.848757 2303 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:49:45.848886 kubelet[2303]: I0904 23:49:45.848804 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:49:45.849570 kubelet[2303]: W0904 23:49:45.849495 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:45.851971 kubelet[2303]: E0904 23:49:45.849543 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:45.852901 kubelet[2303]: E0904 23:49:45.852853 2303 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:49:45.853432 kubelet[2303]: I0904 23:49:45.853397 2303 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:49:45.877209 kubelet[2303]: E0904 23:49:45.852144 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623947f45febbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:49:45.833302973 +0000 UTC m=+0.531824184,LastTimestamp:2025-09-04 23:49:45.833302973 +0000 UTC m=+0.531824184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:49:45.881075 kubelet[2303]: I0904 23:49:45.879937 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:49:45.882204 kubelet[2303]: I0904 23:49:45.882153 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:49:45.882204 kubelet[2303]: I0904 23:49:45.882211 2303 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:49:45.882371 kubelet[2303]: I0904 23:49:45.882248 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
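The "Failed to ensure lease exists, will retry" entry above is the kubelet's node-lease heartbeat failing because nothing is listening on 10.0.0.65:6443 yet. In this log the retry interval doubles on every failure: 200ms here, then 400ms, 800ms, 1.6s, 3.2s and 6.4s further down. A small Go sketch of that doubling backoff follows; the 7s ceiling is an assumption for illustration, not something taken from the log.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond        // first interval logged above
	maxInterval := 7 * time.Second            // assumed ceiling, not shown in this log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed, retrying lease update in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	// Prints 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, matching the intervals
	// reported by controller.go in this log.
}
```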
Sep 4 23:49:45.882371 kubelet[2303]: I0904 23:49:45.882262 2303 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:49:45.882371 kubelet[2303]: E0904 23:49:45.882334 2303 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:49:45.885202 kubelet[2303]: W0904 23:49:45.884942 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:45.885278 kubelet[2303]: E0904 23:49:45.885211 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:45.885847 kubelet[2303]: I0904 23:49:45.885787 2303 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:49:45.885847 kubelet[2303]: I0904 23:49:45.885840 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:49:45.885942 kubelet[2303]: I0904 23:49:45.885860 2303 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:49:45.947536 kubelet[2303]: E0904 23:49:45.947318 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:45.983463 kubelet[2303]: E0904 23:49:45.983312 2303 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:49:46.047762 kubelet[2303]: E0904 23:49:46.047655 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.049451 kubelet[2303]: E0904 23:49:46.049389 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Sep 4 23:49:46.147880 kubelet[2303]: E0904 23:49:46.147793 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.183893 kubelet[2303]: E0904 23:49:46.183769 2303 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:49:46.248488 kubelet[2303]: E0904 23:49:46.248280 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.348863 kubelet[2303]: E0904 23:49:46.348771 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.450031 kubelet[2303]: E0904 23:49:46.449928 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.450625 kubelet[2303]: E0904 23:49:46.450396 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Sep 4 23:49:46.550175 kubelet[2303]: E0904 23:49:46.550087 2303 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 4 23:49:46.584703 kubelet[2303]: E0904 23:49:46.584583 2303 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:49:46.650381 kubelet[2303]: E0904 23:49:46.650286 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.731524 kubelet[2303]: W0904 23:49:46.731404 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:46.731524 kubelet[2303]: E0904 23:49:46.731519 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:46.750486 kubelet[2303]: E0904 23:49:46.750398 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.828310 kubelet[2303]: W0904 23:49:46.828137 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:46.828310 kubelet[2303]: E0904 23:49:46.828230 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:46.850974 kubelet[2303]: E0904 23:49:46.850922 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:46.951401 kubelet[2303]: E0904 23:49:46.951307 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.052520 kubelet[2303]: E0904 23:49:47.052425 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.100518 kubelet[2303]: W0904 23:49:47.100360 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:47.100518 kubelet[2303]: E0904 23:49:47.100450 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:47.153697 kubelet[2303]: E0904 23:49:47.153595 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.237999 kubelet[2303]: W0904 23:49:47.237923 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:47.237999 kubelet[2303]: E0904 23:49:47.237979 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:47.251353 kubelet[2303]: E0904 23:49:47.251301 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Sep 4 23:49:47.254468 kubelet[2303]: E0904 23:49:47.254397 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.354912 kubelet[2303]: E0904 23:49:47.354733 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.385133 kubelet[2303]: E0904 23:49:47.385085 2303 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:49:47.456011 kubelet[2303]: E0904 23:49:47.455823 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.557019 kubelet[2303]: E0904 23:49:47.556924 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.658114 kubelet[2303]: E0904 23:49:47.657857 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.758488 kubelet[2303]: E0904 23:49:47.758370 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.859139 kubelet[2303]: E0904 23:49:47.859061 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:47.883398 kubelet[2303]: E0904 23:49:47.883348 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:47.959598 kubelet[2303]: E0904 23:49:47.959353 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:48.060425 kubelet[2303]: E0904 23:49:48.060316 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:48.160556 kubelet[2303]: E0904 23:49:48.160485 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:48.218475 kubelet[2303]: I0904 23:49:48.218265 2303 policy_none.go:49] "None policy: Start" Sep 4 23:49:48.218475 kubelet[2303]: I0904 23:49:48.218353 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:49:48.218475 kubelet[2303]: I0904 23:49:48.218388 2303 state_mem.go:35] "Initializing new in-memory state store" Sep 4 
23:49:48.261470 kubelet[2303]: E0904 23:49:48.261381 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:48.330860 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:49:48.357607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:49:48.361525 kubelet[2303]: E0904 23:49:48.361488 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:49:48.362549 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:49:48.374970 kubelet[2303]: I0904 23:49:48.374788 2303 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:49:48.375262 kubelet[2303]: I0904 23:49:48.375163 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:49:48.375262 kubelet[2303]: I0904 23:49:48.375183 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:49:48.376296 kubelet[2303]: I0904 23:49:48.375582 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:49:48.378426 kubelet[2303]: E0904 23:49:48.378394 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:49:48.378581 kubelet[2303]: E0904 23:49:48.378453 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:49:48.477286 kubelet[2303]: I0904 23:49:48.477081 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:48.477915 kubelet[2303]: E0904 23:49:48.477632 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Sep 4 23:49:48.679723 kubelet[2303]: I0904 23:49:48.679671 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:48.680102 kubelet[2303]: E0904 23:49:48.680052 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Sep 4 23:49:48.852561 kubelet[2303]: E0904 23:49:48.852485 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="3.2s" Sep 4 23:49:48.996952 systemd[1]: Created slice kubepods-burstable-pod9de77078d0fffcc316044a0a28af34ac.slice - libcontainer container kubepods-burstable-pod9de77078d0fffcc316044a0a28af34ac.slice. Sep 4 23:49:49.013333 kubelet[2303]: E0904 23:49:49.013273 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:49.017889 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
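The repeated "Attempting to register node" / "Unable to register node with API server" pairs above are the kubelet trying to POST its Node object to /api/v1/nodes and hitting the same connection-refused error. Roughly the same call expressed with client-go is sketched below; it is illustrative only, not the kubelet's own registration code, and the kubeconfig path is a placeholder.

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the real kubelet uses its own credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
	// While the API server at 10.0.0.65:6443 is not up yet, this POST fails
	// with "connection refused", which is what the kubelet keeps logging.
	if _, err := clientset.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		log.Printf("unable to register node: %v", err)
	}
}
```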
Sep 4 23:49:49.030941 kubelet[2303]: E0904 23:49:49.030899 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:49.036563 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 4 23:49:49.039103 kubelet[2303]: E0904 23:49:49.039053 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:49.069783 kubelet[2303]: I0904 23:49:49.069683 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:49.069783 kubelet[2303]: I0904 23:49:49.069750 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:49.069783 kubelet[2303]: I0904 23:49:49.069794 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:49.069783 kubelet[2303]: I0904 23:49:49.069809 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:49.070146 kubelet[2303]: I0904 23:49:49.069829 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:49:49.070146 kubelet[2303]: I0904 23:49:49.069845 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:49:49.070146 kubelet[2303]: I0904 23:49:49.069863 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:49.070146 kubelet[2303]: I0904 23:49:49.069950 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:49.070146 kubelet[2303]: I0904 23:49:49.070022 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:49:49.082273 kubelet[2303]: I0904 23:49:49.082228 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:49.082738 kubelet[2303]: E0904 23:49:49.082697 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Sep 4 23:49:49.234125 kubelet[2303]: W0904 23:49:49.233919 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:49.234125 kubelet[2303]: E0904 23:49:49.233980 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:49.314884 kubelet[2303]: E0904 23:49:49.314742 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:49.315838 containerd[1500]: time="2025-09-04T23:49:49.315786480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9de77078d0fffcc316044a0a28af34ac,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:49.332536 kubelet[2303]: E0904 23:49:49.332431 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:49.333190 containerd[1500]: time="2025-09-04T23:49:49.333139913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:49.339853 kubelet[2303]: E0904 23:49:49.339786 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:49.340415 containerd[1500]: time="2025-09-04T23:49:49.340365370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 4 23:49:49.390537 kubelet[2303]: W0904 23:49:49.390367 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:49.390537 kubelet[2303]: E0904 23:49:49.390439 2303 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:49.644941 kubelet[2303]: W0904 23:49:49.644810 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:49.644941 kubelet[2303]: E0904 23:49:49.644878 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:49.650845 kubelet[2303]: W0904 23:49:49.650769 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:49.650845 kubelet[2303]: E0904 23:49:49.650833 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:49.884661 kubelet[2303]: I0904 23:49:49.884591 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:49.885717 kubelet[2303]: E0904 23:49:49.884957 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Sep 4 23:49:50.754988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979553698.mount: Deactivated successfully. 
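The "Nameserver limits exceeded" warnings above mean the node's resolv.conf carries more nameservers than the kubelet will propagate to pods, so only the first few are applied (1.1.1.1 1.0.0.1 8.8.8.8 in this log). A trivial sketch of that truncation follows, assuming a limit of three entries; the fourth server in the example is hypothetical.

```go
package main

import "fmt"

// applyNameserverLimit keeps only the first `limit` nameservers, mirroring
// the "some nameservers have been omitted" behaviour reported above.
func applyNameserverLimit(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // last entry hypothetical
	fmt.Println("applied nameservers:", applyNameserverLimit(configured, 3))
}
```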
Sep 4 23:49:50.944819 containerd[1500]: time="2025-09-04T23:49:50.944736726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:49:50.949600 containerd[1500]: time="2025-09-04T23:49:50.949501914Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:49:50.953709 containerd[1500]: time="2025-09-04T23:49:50.953599052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 4 23:49:50.955415 containerd[1500]: time="2025-09-04T23:49:50.955302722Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:49:50.961499 containerd[1500]: time="2025-09-04T23:49:50.961338453Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:49:50.966647 containerd[1500]: time="2025-09-04T23:49:50.966402002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:49:50.969754 containerd[1500]: time="2025-09-04T23:49:50.969619864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:49:50.972888 containerd[1500]: time="2025-09-04T23:49:50.972710045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:49:50.974056 containerd[1500]: time="2025-09-04T23:49:50.973939612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.65800863s" Sep 4 23:49:50.975677 containerd[1500]: time="2025-09-04T23:49:50.975595071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.635105227s" Sep 4 23:49:50.976309 containerd[1500]: time="2025-09-04T23:49:50.976273960Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.642994063s" Sep 4 23:49:51.487293 kubelet[2303]: I0904 23:49:51.487244 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:51.487834 kubelet[2303]: E0904 23:49:51.487777 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: 
connect: connection refused" node="localhost" Sep 4 23:49:51.853501 containerd[1500]: time="2025-09-04T23:49:51.851312442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:51.853714 containerd[1500]: time="2025-09-04T23:49:51.853538825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:51.853714 containerd[1500]: time="2025-09-04T23:49:51.853607415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.853981 containerd[1500]: time="2025-09-04T23:49:51.853902941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.880169 containerd[1500]: time="2025-09-04T23:49:51.879678165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:51.880169 containerd[1500]: time="2025-09-04T23:49:51.879743579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:51.880169 containerd[1500]: time="2025-09-04T23:49:51.879757485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.880169 containerd[1500]: time="2025-09-04T23:49:51.879841573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.887315 containerd[1500]: time="2025-09-04T23:49:51.884550111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:49:51.887315 containerd[1500]: time="2025-09-04T23:49:51.884613110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:49:51.887315 containerd[1500]: time="2025-09-04T23:49:51.884625052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.887315 containerd[1500]: time="2025-09-04T23:49:51.884721975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:49:51.909389 systemd[1]: Started cri-containerd-aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3.scope - libcontainer container aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3. Sep 4 23:49:51.932484 systemd[1]: Started cri-containerd-18ca9196c59cd95d39bd89e4cbc180b6d1f6ad375fbff65535fb04e2504c742c.scope - libcontainer container 18ca9196c59cd95d39bd89e4cbc180b6d1f6ad375fbff65535fb04e2504c742c. Sep 4 23:49:51.937127 systemd[1]: Started cri-containerd-3c1c2f93c188d0cddb2777f8f19504b6847a74381691d3a559942288facc954c.scope - libcontainer container 3c1c2f93c188d0cddb2777f8f19504b6847a74381691d3a559942288facc954c. 
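Each cri-containerd-….scope unit started above wraps one of the pod sandboxes whose ids are returned just below: the unit name is the 64-character sandbox id with a cri-containerd- prefix and a .scope suffix. A throwaway helper reflecting that naming, based only on what this log shows:

```go
package main

import "fmt"

// scopeUnitForSandbox builds the systemd scope name observed in this log for
// a given CRI sandbox id.
func scopeUnitForSandbox(sandboxID string) string {
	return "cri-containerd-" + sandboxID + ".scope"
}

func main() {
	// Sandbox id of the kube-scheduler-localhost pod from the entries above.
	id := "aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3"
	fmt.Println(scopeUnitForSandbox(id))
}
```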
Sep 4 23:49:52.025103 containerd[1500]: time="2025-09-04T23:49:52.023369721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3\"" Sep 4 23:49:52.025617 kubelet[2303]: E0904 23:49:52.025280 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:52.031572 containerd[1500]: time="2025-09-04T23:49:52.031522502Z" level=info msg="CreateContainer within sandbox \"aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:49:52.051681 containerd[1500]: time="2025-09-04T23:49:52.050815147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9de77078d0fffcc316044a0a28af34ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"18ca9196c59cd95d39bd89e4cbc180b6d1f6ad375fbff65535fb04e2504c742c\"" Sep 4 23:49:52.051941 kubelet[2303]: E0904 23:49:52.051882 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:52.054356 containerd[1500]: time="2025-09-04T23:49:52.054210429Z" level=info msg="CreateContainer within sandbox \"18ca9196c59cd95d39bd89e4cbc180b6d1f6ad375fbff65535fb04e2504c742c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:49:52.054547 kubelet[2303]: E0904 23:49:52.054423 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="6.4s" Sep 4 23:49:52.056629 containerd[1500]: time="2025-09-04T23:49:52.056558381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c1c2f93c188d0cddb2777f8f19504b6847a74381691d3a559942288facc954c\"" Sep 4 23:49:52.058177 kubelet[2303]: E0904 23:49:52.057758 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:52.060200 containerd[1500]: time="2025-09-04T23:49:52.060157768Z" level=info msg="CreateContainer within sandbox \"3c1c2f93c188d0cddb2777f8f19504b6847a74381691d3a559942288facc954c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:49:52.282326 kubelet[2303]: E0904 23:49:52.282266 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:52.491668 containerd[1500]: time="2025-09-04T23:49:52.491563332Z" level=info msg="CreateContainer within sandbox \"aebfd2813551b0d9f61be830fcd9c50c22cf0e4573c27b1222b6dd05144630e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"8a85a1710902b4d18a5db196075832c03d605689919d265a4dad7499f7374047\"" Sep 4 23:49:52.493073 containerd[1500]: time="2025-09-04T23:49:52.492764965Z" level=info msg="StartContainer for \"8a85a1710902b4d18a5db196075832c03d605689919d265a4dad7499f7374047\"" Sep 4 23:49:52.532418 systemd[1]: Started cri-containerd-8a85a1710902b4d18a5db196075832c03d605689919d265a4dad7499f7374047.scope - libcontainer container 8a85a1710902b4d18a5db196075832c03d605689919d265a4dad7499f7374047. Sep 4 23:49:52.948754 kubelet[2303]: W0904 23:49:52.948592 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:52.948754 kubelet[2303]: E0904 23:49:52.948645 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:53.026543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3854104332.mount: Deactivated successfully. Sep 4 23:49:53.246540 containerd[1500]: time="2025-09-04T23:49:53.246180285Z" level=info msg="StartContainer for \"8a85a1710902b4d18a5db196075832c03d605689919d265a4dad7499f7374047\" returns successfully" Sep 4 23:49:53.251943 kubelet[2303]: E0904 23:49:53.251889 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:53.252129 kubelet[2303]: E0904 23:49:53.252096 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:53.468574 kubelet[2303]: W0904 23:49:53.468484 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:53.468574 kubelet[2303]: E0904 23:49:53.468544 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:54.184376 kubelet[2303]: W0904 23:49:54.184310 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:54.184376 kubelet[2303]: E0904 23:49:54.184365 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:54.254982 kubelet[2303]: E0904 23:49:54.254903 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 4 23:49:54.255173 kubelet[2303]: E0904 23:49:54.255094 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:54.660064 kubelet[2303]: E0904 23:49:54.659853 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623947f45febbd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 23:49:45.833302973 +0000 UTC m=+0.531824184,LastTimestamp:2025-09-04 23:49:45.833302973 +0000 UTC m=+0.531824184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 23:49:54.674578 kubelet[2303]: W0904 23:49:54.674477 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Sep 4 23:49:54.674709 kubelet[2303]: E0904 23:49:54.674586 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Sep 4 23:49:54.689394 kubelet[2303]: I0904 23:49:54.689350 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:49:54.689804 kubelet[2303]: E0904 23:49:54.689751 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Sep 4 23:49:54.877348 containerd[1500]: time="2025-09-04T23:49:54.877262692Z" level=info msg="CreateContainer within sandbox \"18ca9196c59cd95d39bd89e4cbc180b6d1f6ad375fbff65535fb04e2504c742c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff802da1883f56c5cc11eccc0345e9dd1691e7b3544662105687338f5f4580dd\"" Sep 4 23:49:54.877974 containerd[1500]: time="2025-09-04T23:49:54.877847222Z" level=info msg="StartContainer for \"ff802da1883f56c5cc11eccc0345e9dd1691e7b3544662105687338f5f4580dd\"" Sep 4 23:49:54.921225 systemd[1]: Started cri-containerd-ff802da1883f56c5cc11eccc0345e9dd1691e7b3544662105687338f5f4580dd.scope - libcontainer container ff802da1883f56c5cc11eccc0345e9dd1691e7b3544662105687338f5f4580dd. 
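The "Unable to write event" error above dumps, on a single line, the Event object the kubelet keeps retrying to POST to /api/v1/namespaces/default/events. The same object is easier to read reconstructed as a struct; this is a sketch built from the logged fields, not kubelet code.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// FirstTimestamp/LastTimestamp as logged: 2025-09-04 23:49:45.833302973 UTC.
	started := metav1.NewTime(time.Date(2025, 9, 4, 23, 49, 45, 833302973, time.UTC))
	event := corev1.Event{
		ObjectMeta: metav1.ObjectMeta{Name: "localhost.18623947f45febbd", Namespace: "default"},
		InvolvedObject: corev1.ObjectReference{
			Kind: "Node",
			Name: "localhost",
			UID:  "localhost",
		},
		Reason:              "Starting",
		Message:             "Starting kubelet.",
		Source:              corev1.EventSource{Component: "kubelet", Host: "localhost"},
		FirstTimestamp:      started,
		LastTimestamp:       started,
		Count:               1,
		Type:                corev1.EventTypeNormal,
		ReportingController: "kubelet",
		ReportingInstance:   "localhost",
	}
	fmt.Printf("%s/%s: %s %s\n",
		event.InvolvedObject.Kind, event.InvolvedObject.Name, event.Reason, event.Message)
}
```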
Sep 4 23:49:55.321745 containerd[1500]: time="2025-09-04T23:49:55.321569386Z" level=info msg="StartContainer for \"ff802da1883f56c5cc11eccc0345e9dd1691e7b3544662105687338f5f4580dd\" returns successfully" Sep 4 23:49:55.321745 containerd[1500]: time="2025-09-04T23:49:55.321671567Z" level=info msg="CreateContainer within sandbox \"3c1c2f93c188d0cddb2777f8f19504b6847a74381691d3a559942288facc954c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"935a52dde96b3addcb651d7c150cb9024469a85d23885292d0ea1f7ee7edfd55\"" Sep 4 23:49:55.323715 containerd[1500]: time="2025-09-04T23:49:55.322311551Z" level=info msg="StartContainer for \"935a52dde96b3addcb651d7c150cb9024469a85d23885292d0ea1f7ee7edfd55\"" Sep 4 23:49:55.327059 kubelet[2303]: E0904 23:49:55.326997 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:55.327473 kubelet[2303]: E0904 23:49:55.327161 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:55.382320 systemd[1]: Started cri-containerd-935a52dde96b3addcb651d7c150cb9024469a85d23885292d0ea1f7ee7edfd55.scope - libcontainer container 935a52dde96b3addcb651d7c150cb9024469a85d23885292d0ea1f7ee7edfd55. Sep 4 23:49:55.610925 containerd[1500]: time="2025-09-04T23:49:55.610748127Z" level=info msg="StartContainer for \"935a52dde96b3addcb651d7c150cb9024469a85d23885292d0ea1f7ee7edfd55\" returns successfully" Sep 4 23:49:56.334711 kubelet[2303]: E0904 23:49:56.334663 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:56.335285 kubelet[2303]: E0904 23:49:56.334830 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:56.335375 kubelet[2303]: E0904 23:49:56.335344 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:56.335496 kubelet[2303]: E0904 23:49:56.335467 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:57.107361 kubelet[2303]: E0904 23:49:57.107308 2303 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 23:49:57.336191 kubelet[2303]: E0904 23:49:57.336144 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:57.336738 kubelet[2303]: E0904 23:49:57.336271 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:57.336738 kubelet[2303]: E0904 23:49:57.336328 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:57.336738 kubelet[2303]: E0904 23:49:57.336446 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:57.785992 kubelet[2303]: E0904 23:49:57.785929 2303 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 23:49:58.337295 kubelet[2303]: E0904 23:49:58.337260 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:58.337748 kubelet[2303]: E0904 23:49:58.337359 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:58.337748 kubelet[2303]: E0904 23:49:58.337396 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:58.337748 kubelet[2303]: E0904 23:49:58.337462 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:49:58.378701 kubelet[2303]: E0904 23:49:58.378650 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 23:49:58.693240 kubelet[2303]: E0904 23:49:58.693078 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 23:49:58.979813 kubelet[2303]: E0904 23:49:58.979648 2303 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 23:49:59.969981 kubelet[2303]: E0904 23:49:59.969930 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 23:49:59.970674 kubelet[2303]: E0904 23:49:59.970104 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:00.079331 kubelet[2303]: E0904 23:50:00.079272 2303 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 4 23:50:01.092152 kubelet[2303]: I0904 23:50:01.092095 2303 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:50:01.345760 kubelet[2303]: I0904 23:50:01.345619 2303 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:50:01.347719 kubelet[2303]: I0904 23:50:01.347682 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:01.416571 kubelet[2303]: I0904 23:50:01.416502 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:01.744313 kubelet[2303]: I0904 23:50:01.744131 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:01.787331 kubelet[2303]: I0904 23:50:01.787271 2303 apiserver.go:52] "Watching apiserver" Sep 4 23:50:01.789926 kubelet[2303]: E0904 23:50:01.789899 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:01.790098 kubelet[2303]: E0904 23:50:01.789898 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:01.849223 kubelet[2303]: I0904 23:50:01.849143 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:50:02.506263 kubelet[2303]: E0904 23:50:02.506211 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:05.938603 kubelet[2303]: I0904 23:50:05.937667 2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.937639275 podStartE2EDuration="4.937639275s" podCreationTimestamp="2025-09-04 23:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:05.937002068 +0000 UTC m=+20.635523299" watchObservedRunningTime="2025-09-04 23:50:05.937639275 +0000 UTC m=+20.636160486" Sep 4 23:50:05.989211 kubelet[2303]: I0904 23:50:05.988644 2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.988618775 podStartE2EDuration="4.988618775s" podCreationTimestamp="2025-09-04 23:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:05.963567959 +0000 UTC m=+20.662089170" watchObservedRunningTime="2025-09-04 23:50:05.988618775 +0000 UTC m=+20.687139986" Sep 4 23:50:06.481884 kubelet[2303]: E0904 23:50:06.481830 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:06.506009 kubelet[2303]: I0904 23:50:06.505919 2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.505889917 podStartE2EDuration="5.505889917s" podCreationTimestamp="2025-09-04 23:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:05.988901826 +0000 UTC m=+20.687423038" watchObservedRunningTime="2025-09-04 23:50:06.505889917 +0000 UTC m=+21.204411128" Sep 4 23:50:06.597513 kernel: hrtimer: interrupt took 10725935 ns Sep 4 23:50:06.619752 kubelet[2303]: E0904 23:50:06.619376 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:07.358160 kubelet[2303]: E0904 23:50:07.358086 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:09.282865 systemd[1]: Reload requested from client PID 2586 ('systemctl') (unit session-7.scope)... Sep 4 23:50:09.282887 systemd[1]: Reloading... Sep 4 23:50:09.445099 zram_generator::config[2636]: No configuration found. 
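In the "Observed pod startup duration" entries above, podStartSLOduration is exactly watchObservedRunningTime minus podCreationTimestamp; for kube-apiserver-localhost, 23:50:05.937639275 minus 23:50:01 gives 4.937639275s, the value logged. A quick check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // timestamp format used in the log
	created, err := time.Parse(layout, "2025-09-04 23:50:01 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-09-04 23:50:05.937639275 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 4.937639275s, the logged podStartSLOduration
}
```

The same relation holds for the kube-controller-manager-localhost (4.988618775s) and kube-scheduler-localhost (5.505889917s) entries.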
Sep 4 23:50:09.559468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:50:09.703666 systemd[1]: Reloading finished in 420 ms. Sep 4 23:50:09.731409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:09.751020 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:50:09.751507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:09.751577 systemd[1]: kubelet.service: Consumed 1.689s CPU time, 136.5M memory peak. Sep 4 23:50:09.760573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:50:10.012543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:50:10.017166 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:50:10.062875 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:50:10.062875 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:50:10.062875 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:50:10.063413 kubelet[2675]: I0904 23:50:10.062931 2675 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:50:10.074146 kubelet[2675]: I0904 23:50:10.074098 2675 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 23:50:10.075020 kubelet[2675]: I0904 23:50:10.074460 2675 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:50:10.075020 kubelet[2675]: I0904 23:50:10.074751 2675 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 23:50:10.076238 kubelet[2675]: I0904 23:50:10.076211 2675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 23:50:10.078648 kubelet[2675]: I0904 23:50:10.078589 2675 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:50:10.082149 kubelet[2675]: E0904 23:50:10.082118 2675 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:50:10.082271 kubelet[2675]: I0904 23:50:10.082242 2675 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:50:10.091539 kubelet[2675]: I0904 23:50:10.091466 2675 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:50:10.091808 kubelet[2675]: I0904 23:50:10.091768 2675 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:50:10.092053 kubelet[2675]: I0904 23:50:10.091806 2675 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:50:10.092179 kubelet[2675]: I0904 23:50:10.092066 2675 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 23:50:10.092179 kubelet[2675]: I0904 23:50:10.092082 2675 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 23:50:10.092179 kubelet[2675]: I0904 23:50:10.092151 2675 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:10.092423 kubelet[2675]: I0904 23:50:10.092388 2675 kubelet.go:446] "Attempting to sync node with API server" Sep 4 23:50:10.092479 kubelet[2675]: I0904 23:50:10.092429 2675 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:50:10.092479 kubelet[2675]: I0904 23:50:10.092451 2675 kubelet.go:352] "Adding apiserver pod source" Sep 4 23:50:10.092479 kubelet[2675]: I0904 23:50:10.092466 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:50:10.095082 kubelet[2675]: I0904 23:50:10.093570 2675 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:50:10.095082 kubelet[2675]: I0904 23:50:10.094084 2675 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 23:50:10.095082 kubelet[2675]: I0904 23:50:10.094676 2675 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:50:10.095082 kubelet[2675]: I0904 23:50:10.094714 2675 server.go:1287] "Started kubelet" Sep 4 23:50:10.096599 kubelet[2675]: I0904 23:50:10.095948 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:50:10.097163 kubelet[2675]: I0904 23:50:10.097146 2675 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:50:10.097354 kubelet[2675]: I0904 23:50:10.097307 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:50:10.097435 kubelet[2675]: I0904 23:50:10.097314 2675 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:50:10.099196 kubelet[2675]: I0904 23:50:10.099181 2675 server.go:479] "Adding debug handlers to kubelet server" Sep 4 23:50:10.101253 kubelet[2675]: I0904 23:50:10.101231 2675 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:50:10.108106 kubelet[2675]: I0904 23:50:10.108063 2675 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:50:10.108387 kubelet[2675]: E0904 23:50:10.108357 2675 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 23:50:10.109201 kubelet[2675]: E0904 23:50:10.109172 2675 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:50:10.109500 kubelet[2675]: I0904 23:50:10.109479 2675 factory.go:221] Registration of the systemd container factory successfully Sep 4 23:50:10.109770 kubelet[2675]: I0904 23:50:10.109743 2675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:50:10.111522 kubelet[2675]: I0904 23:50:10.111492 2675 factory.go:221] Registration of the containerd container factory successfully Sep 4 23:50:10.112064 kubelet[2675]: I0904 23:50:10.111907 2675 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:50:10.112424 kubelet[2675]: I0904 23:50:10.112242 2675 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:50:10.114772 kubelet[2675]: I0904 23:50:10.114664 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 23:50:10.118738 kubelet[2675]: I0904 23:50:10.118711 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 23:50:10.119350 kubelet[2675]: I0904 23:50:10.118894 2675 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 23:50:10.119350 kubelet[2675]: I0904 23:50:10.118930 2675 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:50:10.119350 kubelet[2675]: I0904 23:50:10.118940 2675 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 23:50:10.119350 kubelet[2675]: E0904 23:50:10.118999 2675 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:50:10.157697 kubelet[2675]: I0904 23:50:10.157642 2675 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:50:10.157697 kubelet[2675]: I0904 23:50:10.157669 2675 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:50:10.157697 kubelet[2675]: I0904 23:50:10.157696 2675 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:50:10.157962 kubelet[2675]: I0904 23:50:10.157941 2675 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:50:10.157995 kubelet[2675]: I0904 23:50:10.157957 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:50:10.157995 kubelet[2675]: I0904 23:50:10.157981 2675 policy_none.go:49] "None policy: Start" Sep 4 23:50:10.158075 kubelet[2675]: I0904 23:50:10.158005 2675 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:50:10.158075 kubelet[2675]: I0904 23:50:10.158021 2675 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:50:10.158211 kubelet[2675]: I0904 23:50:10.158190 2675 state_mem.go:75] "Updated machine memory state" Sep 4 23:50:10.163813 kubelet[2675]: I0904 23:50:10.163731 2675 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 23:50:10.164249 kubelet[2675]: I0904 23:50:10.163984 2675 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:50:10.164249 kubelet[2675]: I0904 23:50:10.164001 2675 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:50:10.164587 kubelet[2675]: I0904 23:50:10.164559 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:50:10.167177 kubelet[2675]: E0904 23:50:10.167106 2675 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:50:10.220508 kubelet[2675]: I0904 23:50:10.220389 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:10.220712 kubelet[2675]: I0904 23:50:10.220539 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:10.221294 kubelet[2675]: I0904 23:50:10.220418 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:10.270440 kubelet[2675]: I0904 23:50:10.270397 2675 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 23:50:10.413524 kubelet[2675]: I0904 23:50:10.413428 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:10.413524 kubelet[2675]: I0904 23:50:10.413515 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:10.413768 kubelet[2675]: I0904 23:50:10.413552 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:10.413768 kubelet[2675]: I0904 23:50:10.413571 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:10.413768 kubelet[2675]: I0904 23:50:10.413595 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:10.413768 kubelet[2675]: I0904 23:50:10.413616 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:10.413768 kubelet[2675]: I0904 23:50:10.413637 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:10.413970 kubelet[2675]: I0904 23:50:10.413657 2675 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de77078d0fffcc316044a0a28af34ac-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9de77078d0fffcc316044a0a28af34ac\") " pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:10.413970 kubelet[2675]: I0904 23:50:10.413676 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:11.010746 kubelet[2675]: E0904 23:50:11.010676 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:11.011077 kubelet[2675]: E0904 23:50:11.010926 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:11.093617 kubelet[2675]: I0904 23:50:11.093489 2675 apiserver.go:52] "Watching apiserver" Sep 4 23:50:11.112743 kubelet[2675]: I0904 23:50:11.112608 2675 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:50:11.132777 kubelet[2675]: I0904 23:50:11.132667 2675 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:11.624763 kubelet[2675]: E0904 23:50:11.624710 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 4 23:50:11.624931 kubelet[2675]: E0904 23:50:11.624904 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 23:50:11.624992 kubelet[2675]: E0904 23:50:11.624951 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:11.625124 kubelet[2675]: E0904 23:50:11.625066 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:11.626300 kubelet[2675]: E0904 23:50:11.626273 2675 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 4 23:50:11.626693 kubelet[2675]: E0904 23:50:11.626376 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:11.627460 kubelet[2675]: I0904 23:50:11.627418 2675 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 23:50:11.627513 kubelet[2675]: I0904 23:50:11.627487 2675 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 23:50:12.133912 kubelet[2675]: E0904 23:50:12.133857 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:12.134526 kubelet[2675]: E0904 23:50:12.134116 2675 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:12.134526 kubelet[2675]: E0904 23:50:12.134196 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:13.136323 kubelet[2675]: E0904 23:50:13.136275 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:13.315118 kubelet[2675]: E0904 23:50:13.315067 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:14.138662 kubelet[2675]: E0904 23:50:14.138617 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:14.139301 kubelet[2675]: E0904 23:50:14.138812 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:15.140189 kubelet[2675]: E0904 23:50:15.140130 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:16.141458 kubelet[2675]: E0904 23:50:16.141419 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:16.788654 kubelet[2675]: E0904 23:50:16.788551 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:17.142625 kubelet[2675]: E0904 23:50:17.142455 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:18.143515 kubelet[2675]: E0904 23:50:18.143448 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:21.761631 kubelet[2675]: I0904 23:50:21.761391 2675 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:50:21.762125 containerd[1500]: time="2025-09-04T23:50:21.761884892Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:50:21.762484 kubelet[2675]: I0904 23:50:21.762271 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:50:22.458461 systemd[1]: Created slice kubepods-besteffort-pod58d79f49_84d4_4944_9d38_2e9d2b574446.slice - libcontainer container kubepods-besteffort-pod58d79f49_84d4_4944_9d38_2e9d2b574446.slice. 
Sep 4 23:50:22.487477 kubelet[2675]: I0904 23:50:22.487432 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58d79f49-84d4-4944-9d38-2e9d2b574446-lib-modules\") pod \"kube-proxy-7fr8d\" (UID: \"58d79f49-84d4-4944-9d38-2e9d2b574446\") " pod="kube-system/kube-proxy-7fr8d" Sep 4 23:50:22.488101 kubelet[2675]: I0904 23:50:22.488073 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58d79f49-84d4-4944-9d38-2e9d2b574446-xtables-lock\") pod \"kube-proxy-7fr8d\" (UID: \"58d79f49-84d4-4944-9d38-2e9d2b574446\") " pod="kube-system/kube-proxy-7fr8d" Sep 4 23:50:22.488155 kubelet[2675]: I0904 23:50:22.488111 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlh97\" (UniqueName: \"kubernetes.io/projected/58d79f49-84d4-4944-9d38-2e9d2b574446-kube-api-access-zlh97\") pod \"kube-proxy-7fr8d\" (UID: \"58d79f49-84d4-4944-9d38-2e9d2b574446\") " pod="kube-system/kube-proxy-7fr8d" Sep 4 23:50:22.488155 kubelet[2675]: I0904 23:50:22.488140 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/58d79f49-84d4-4944-9d38-2e9d2b574446-kube-proxy\") pod \"kube-proxy-7fr8d\" (UID: \"58d79f49-84d4-4944-9d38-2e9d2b574446\") " pod="kube-system/kube-proxy-7fr8d" Sep 4 23:50:22.783431 kubelet[2675]: E0904 23:50:22.783342 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:22.784302 containerd[1500]: time="2025-09-04T23:50:22.784249623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fr8d,Uid:58d79f49-84d4-4944-9d38-2e9d2b574446,Namespace:kube-system,Attempt:0,}" Sep 4 23:50:23.665030 systemd[1]: Created slice kubepods-besteffort-pod2d10ec9a_ebb5_49fa_b31d_3069031e2e54.slice - libcontainer container kubepods-besteffort-pod2d10ec9a_ebb5_49fa_b31d_3069031e2e54.slice. Sep 4 23:50:23.675213 containerd[1500]: time="2025-09-04T23:50:23.675072156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:23.675213 containerd[1500]: time="2025-09-04T23:50:23.675175319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:23.675213 containerd[1500]: time="2025-09-04T23:50:23.675199245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:23.675406 containerd[1500]: time="2025-09-04T23:50:23.675326123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:23.699312 kubelet[2675]: I0904 23:50:23.699272 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d10ec9a-ebb5-49fa-b31d-3069031e2e54-var-lib-calico\") pod \"tigera-operator-755d956888-rtfj5\" (UID: \"2d10ec9a-ebb5-49fa-b31d-3069031e2e54\") " pod="tigera-operator/tigera-operator-755d956888-rtfj5" Sep 4 23:50:23.699517 kubelet[2675]: I0904 23:50:23.699469 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bdb\" (UniqueName: \"kubernetes.io/projected/2d10ec9a-ebb5-49fa-b31d-3069031e2e54-kube-api-access-l8bdb\") pod \"tigera-operator-755d956888-rtfj5\" (UID: \"2d10ec9a-ebb5-49fa-b31d-3069031e2e54\") " pod="tigera-operator/tigera-operator-755d956888-rtfj5" Sep 4 23:50:23.708290 systemd[1]: Started cri-containerd-510d86b7e86498e0eeccb96e17ee8809aca4408b327a931bf2f711bcdf44f591.scope - libcontainer container 510d86b7e86498e0eeccb96e17ee8809aca4408b327a931bf2f711bcdf44f591. Sep 4 23:50:23.748831 containerd[1500]: time="2025-09-04T23:50:23.748745993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7fr8d,Uid:58d79f49-84d4-4944-9d38-2e9d2b574446,Namespace:kube-system,Attempt:0,} returns sandbox id \"510d86b7e86498e0eeccb96e17ee8809aca4408b327a931bf2f711bcdf44f591\"" Sep 4 23:50:23.750144 kubelet[2675]: E0904 23:50:23.749995 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:23.752347 containerd[1500]: time="2025-09-04T23:50:23.752165297Z" level=info msg="CreateContainer within sandbox \"510d86b7e86498e0eeccb96e17ee8809aca4408b327a931bf2f711bcdf44f591\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:50:24.570371 containerd[1500]: time="2025-09-04T23:50:24.570292367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rtfj5,Uid:2d10ec9a-ebb5-49fa-b31d-3069031e2e54,Namespace:tigera-operator,Attempt:0,}" Sep 4 23:50:25.306019 containerd[1500]: time="2025-09-04T23:50:25.305720047Z" level=info msg="CreateContainer within sandbox \"510d86b7e86498e0eeccb96e17ee8809aca4408b327a931bf2f711bcdf44f591\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74baf7d5637c36802b07d9f4ca6e000b5dcfd8afff44bc9fe3b021b005e91741\"" Sep 4 23:50:25.306567 containerd[1500]: time="2025-09-04T23:50:25.306540736Z" level=info msg="StartContainer for \"74baf7d5637c36802b07d9f4ca6e000b5dcfd8afff44bc9fe3b021b005e91741\"" Sep 4 23:50:25.346339 systemd[1]: Started cri-containerd-74baf7d5637c36802b07d9f4ca6e000b5dcfd8afff44bc9fe3b021b005e91741.scope - libcontainer container 74baf7d5637c36802b07d9f4ca6e000b5dcfd8afff44bc9fe3b021b005e91741. Sep 4 23:50:25.595116 containerd[1500]: time="2025-09-04T23:50:25.594811584Z" level=info msg="StartContainer for \"74baf7d5637c36802b07d9f4ca6e000b5dcfd8afff44bc9fe3b021b005e91741\" returns successfully" Sep 4 23:50:26.085811 containerd[1500]: time="2025-09-04T23:50:26.084753018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:50:26.085811 containerd[1500]: time="2025-09-04T23:50:26.084826255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:50:26.085811 containerd[1500]: time="2025-09-04T23:50:26.084859487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:26.085811 containerd[1500]: time="2025-09-04T23:50:26.084993178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:50:26.124222 systemd[1]: Started cri-containerd-b004a0a4f3d9d19108ad1b19182a79583ca37805db641aa08882531af55b3eff.scope - libcontainer container b004a0a4f3d9d19108ad1b19182a79583ca37805db641aa08882531af55b3eff. Sep 4 23:50:26.170345 kubelet[2675]: E0904 23:50:26.169811 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:26.175519 containerd[1500]: time="2025-09-04T23:50:26.175462887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-rtfj5,Uid:2d10ec9a-ebb5-49fa-b31d-3069031e2e54,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b004a0a4f3d9d19108ad1b19182a79583ca37805db641aa08882531af55b3eff\"" Sep 4 23:50:26.177941 containerd[1500]: time="2025-09-04T23:50:26.177889830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 4 23:50:26.385377 kubelet[2675]: I0904 23:50:26.385162 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7fr8d" podStartSLOduration=4.362391222 podStartE2EDuration="4.362391222s" podCreationTimestamp="2025-09-04 23:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:50:26.362005789 +0000 UTC m=+16.339926089" watchObservedRunningTime="2025-09-04 23:50:26.362391222 +0000 UTC m=+16.340311522" Sep 4 23:50:27.173074 kubelet[2675]: E0904 23:50:27.173010 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:31.554992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204179731.mount: Deactivated successfully. 
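The pod_startup_latency_tracker entry above reports podStartSLOduration=4.362391222s for kube-proxy-7fr8d with zero image-pull timestamps. Reading the logged fields, that duration appears to be watchObservedRunningTime minus podCreationTimestamp; treating that relationship as an assumption rather than documented kubelet behaviour, the arithmetic can be checked in a few lines of Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both timestamps are copied verbatim from the log line above.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-09-04 23:50:22 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-09-04 23:50:26.362391222 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 4.362391222s, matching the logged podStartSLOduration.
	fmt.Println(observed.Sub(created))
}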
Sep 4 23:50:35.141350 containerd[1500]: time="2025-09-04T23:50:35.141260617Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:35.277237 containerd[1500]: time="2025-09-04T23:50:35.277144070Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 4 23:50:35.573909 containerd[1500]: time="2025-09-04T23:50:35.573832622Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:36.032552 containerd[1500]: time="2025-09-04T23:50:36.032474342Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:50:36.033477 containerd[1500]: time="2025-09-04T23:50:36.033433709Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 9.855495728s" Sep 4 23:50:36.033540 containerd[1500]: time="2025-09-04T23:50:36.033478695Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 4 23:50:36.035848 containerd[1500]: time="2025-09-04T23:50:36.035811035Z" level=info msg="CreateContainer within sandbox \"b004a0a4f3d9d19108ad1b19182a79583ca37805db641aa08882531af55b3eff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 23:50:41.682075 containerd[1500]: time="2025-09-04T23:50:41.681946981Z" level=info msg="CreateContainer within sandbox \"b004a0a4f3d9d19108ad1b19182a79583ca37805db641aa08882531af55b3eff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a6d1ac1770be399dd46ae547e420ac032dc04b189be436d0e21611048b7cbc0\"" Sep 4 23:50:41.682720 containerd[1500]: time="2025-09-04T23:50:41.682670148Z" level=info msg="StartContainer for \"2a6d1ac1770be399dd46ae547e420ac032dc04b189be436d0e21611048b7cbc0\"" Sep 4 23:50:41.728382 systemd[1]: Started cri-containerd-2a6d1ac1770be399dd46ae547e420ac032dc04b189be436d0e21611048b7cbc0.scope - libcontainer container 2a6d1ac1770be399dd46ae547e420ac032dc04b189be436d0e21611048b7cbc0. Sep 4 23:50:45.218353 containerd[1500]: time="2025-09-04T23:50:45.218113606Z" level=info msg="StartContainer for \"2a6d1ac1770be399dd46ae547e420ac032dc04b189be436d0e21611048b7cbc0\" returns successfully" Sep 4 23:50:45.220647 kubelet[2675]: E0904 23:50:45.219859 2675 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.101s" Sep 4 23:50:56.127890 sudo[1694]: pam_unix(sudo:session): session closed for user root Sep 4 23:50:56.133543 sshd[1693]: Connection closed by 10.0.0.1 port 41772 Sep 4 23:50:56.153950 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Sep 4 23:50:56.163249 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:41772.service: Deactivated successfully. Sep 4 23:50:56.169283 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:50:56.169576 systemd[1]: session-7.scope: Consumed 6.020s CPU time, 217.5M memory peak. 
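Earlier in this stretch containerd reports pulling quay.io/tigera/operator:v1.38.6, size 25058604 bytes, in 9.855495728s. The throughput below is not logged anywhere; it is just arithmetic on those two logged values, sketched in Go:

package main

import "fmt"

func main() {
	const imageBytes = 25058604     // size reported in the PullImage result
	const pullSeconds = 9.855495728 // duration reported in the same message
	mib := float64(imageBytes) / (1 << 20)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.2f MiB/s\n", mib, pullSeconds, mib/pullSeconds)
}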
Sep 4 23:50:56.172577 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:50:56.174864 systemd-logind[1485]: Removed session 7. Sep 4 23:50:58.733238 kubelet[2675]: I0904 23:50:58.732402 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-rtfj5" podStartSLOduration=26.875238138 podStartE2EDuration="36.732379553s" podCreationTimestamp="2025-09-04 23:50:22 +0000 UTC" firstStartedPulling="2025-09-04 23:50:26.177322737 +0000 UTC m=+16.155243037" lastFinishedPulling="2025-09-04 23:50:36.034464152 +0000 UTC m=+26.012384452" observedRunningTime="2025-09-04 23:50:46.462983181 +0000 UTC m=+36.440903491" watchObservedRunningTime="2025-09-04 23:50:58.732379553 +0000 UTC m=+48.710299853" Sep 4 23:50:58.750195 systemd[1]: Created slice kubepods-besteffort-pod2d90e856_df4a_46de_ac76_ba54e6521b5a.slice - libcontainer container kubepods-besteffort-pod2d90e856_df4a_46de_ac76_ba54e6521b5a.slice. Sep 4 23:50:58.820598 systemd[1]: Created slice kubepods-besteffort-poda944b2b5_20b5_48cb_bb5a_5db5fc4f4059.slice - libcontainer container kubepods-besteffort-poda944b2b5_20b5_48cb_bb5a_5db5fc4f4059.slice. Sep 4 23:50:58.826893 kubelet[2675]: I0904 23:50:58.826833 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-cni-bin-dir\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.826893 kubelet[2675]: I0904 23:50:58.826879 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-cni-net-dir\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.826893 kubelet[2675]: I0904 23:50:58.826903 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-lib-modules\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827186 kubelet[2675]: I0904 23:50:58.826925 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2d90e856-df4a-46de-ac76-ba54e6521b5a-typha-certs\") pod \"calico-typha-9948499b-lcwpc\" (UID: \"2d90e856-df4a-46de-ac76-ba54e6521b5a\") " pod="calico-system/calico-typha-9948499b-lcwpc" Sep 4 23:50:58.827186 kubelet[2675]: I0904 23:50:58.826943 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-tigera-ca-bundle\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827186 kubelet[2675]: I0904 23:50:58.826961 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-var-run-calico\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827186 kubelet[2675]: I0904 
23:50:58.826981 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-xtables-lock\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827186 kubelet[2675]: I0904 23:50:58.827008 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8n5\" (UniqueName: \"kubernetes.io/projected/2d90e856-df4a-46de-ac76-ba54e6521b5a-kube-api-access-hb8n5\") pod \"calico-typha-9948499b-lcwpc\" (UID: \"2d90e856-df4a-46de-ac76-ba54e6521b5a\") " pod="calico-system/calico-typha-9948499b-lcwpc" Sep 4 23:50:58.827396 kubelet[2675]: I0904 23:50:58.827031 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d90e856-df4a-46de-ac76-ba54e6521b5a-tigera-ca-bundle\") pod \"calico-typha-9948499b-lcwpc\" (UID: \"2d90e856-df4a-46de-ac76-ba54e6521b5a\") " pod="calico-system/calico-typha-9948499b-lcwpc" Sep 4 23:50:58.827396 kubelet[2675]: I0904 23:50:58.827082 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-flexvol-driver-host\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827396 kubelet[2675]: I0904 23:50:58.827100 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-node-certs\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827396 kubelet[2675]: I0904 23:50:58.827120 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-cni-log-dir\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827396 kubelet[2675]: I0904 23:50:58.827145 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-policysync\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827558 kubelet[2675]: I0904 23:50:58.827165 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-var-lib-calico\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.827558 kubelet[2675]: I0904 23:50:58.827196 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8rj\" (UniqueName: \"kubernetes.io/projected/a944b2b5-20b5-48cb-bb5a-5db5fc4f4059-kube-api-access-ch8rj\") pod \"calico-node-wsj4t\" (UID: \"a944b2b5-20b5-48cb-bb5a-5db5fc4f4059\") " pod="calico-system/calico-node-wsj4t" Sep 4 23:50:58.945107 kubelet[2675]: E0904 23:50:58.940409 2675 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:50:58.954615 kubelet[2675]: E0904 23:50:58.954558 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:58.954757 kubelet[2675]: W0904 23:50:58.954706 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:58.955963 kubelet[2675]: E0904 23:50:58.954911 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:58.959082 kubelet[2675]: E0904 23:50:58.956302 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:58.959082 kubelet[2675]: W0904 23:50:58.956595 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:58.959082 kubelet[2675]: E0904 23:50:58.956617 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:58.959933 kubelet[2675]: E0904 23:50:58.959893 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:58.959933 kubelet[2675]: W0904 23:50:58.959925 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:58.960025 kubelet[2675]: E0904 23:50:58.959953 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:58.960321 kubelet[2675]: E0904 23:50:58.960300 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:58.960321 kubelet[2675]: W0904 23:50:58.960316 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:58.960402 kubelet[2675]: E0904 23:50:58.960327 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.028488 kubelet[2675]: E0904 23:50:59.028433 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.028488 kubelet[2675]: W0904 23:50:59.028474 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.028488 kubelet[2675]: E0904 23:50:59.028507 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.028911 kubelet[2675]: E0904 23:50:59.028846 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.028911 kubelet[2675]: W0904 23:50:59.028858 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.028911 kubelet[2675]: E0904 23:50:59.028870 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.029326 kubelet[2675]: E0904 23:50:59.029286 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.029326 kubelet[2675]: W0904 23:50:59.029304 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.029326 kubelet[2675]: E0904 23:50:59.029316 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.032014 kubelet[2675]: E0904 23:50:59.031978 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.032014 kubelet[2675]: W0904 23:50:59.031997 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.032014 kubelet[2675]: E0904 23:50:59.032010 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.032602 kubelet[2675]: E0904 23:50:59.032572 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.032664 kubelet[2675]: W0904 23:50:59.032601 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.032664 kubelet[2675]: E0904 23:50:59.032627 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.032906 kubelet[2675]: E0904 23:50:59.032879 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.032906 kubelet[2675]: W0904 23:50:59.032892 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.032906 kubelet[2675]: E0904 23:50:59.032904 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.033203 kubelet[2675]: E0904 23:50:59.033173 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.033203 kubelet[2675]: W0904 23:50:59.033188 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.033203 kubelet[2675]: E0904 23:50:59.033200 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.033481 kubelet[2675]: E0904 23:50:59.033462 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.033481 kubelet[2675]: W0904 23:50:59.033478 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.033564 kubelet[2675]: E0904 23:50:59.033491 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.033772 kubelet[2675]: E0904 23:50:59.033753 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.033772 kubelet[2675]: W0904 23:50:59.033766 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.033855 kubelet[2675]: E0904 23:50:59.033778 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.034029 kubelet[2675]: E0904 23:50:59.034011 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.034029 kubelet[2675]: W0904 23:50:59.034024 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.034134 kubelet[2675]: E0904 23:50:59.034059 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.034336 kubelet[2675]: E0904 23:50:59.034316 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.034336 kubelet[2675]: W0904 23:50:59.034330 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.034416 kubelet[2675]: E0904 23:50:59.034342 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.034605 kubelet[2675]: E0904 23:50:59.034587 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.034605 kubelet[2675]: W0904 23:50:59.034600 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.034688 kubelet[2675]: E0904 23:50:59.034613 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.034881 kubelet[2675]: E0904 23:50:59.034862 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.034881 kubelet[2675]: W0904 23:50:59.034875 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.034965 kubelet[2675]: E0904 23:50:59.034888 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.035166 kubelet[2675]: E0904 23:50:59.035148 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.035166 kubelet[2675]: W0904 23:50:59.035162 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.035268 kubelet[2675]: E0904 23:50:59.035175 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.035505 kubelet[2675]: E0904 23:50:59.035486 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.035505 kubelet[2675]: W0904 23:50:59.035499 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.035584 kubelet[2675]: E0904 23:50:59.035511 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.035773 kubelet[2675]: E0904 23:50:59.035755 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.035773 kubelet[2675]: W0904 23:50:59.035768 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.035849 kubelet[2675]: E0904 23:50:59.035780 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.036101 kubelet[2675]: E0904 23:50:59.036078 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.036101 kubelet[2675]: W0904 23:50:59.036092 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.036186 kubelet[2675]: E0904 23:50:59.036105 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.036421 kubelet[2675]: E0904 23:50:59.036383 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.036421 kubelet[2675]: W0904 23:50:59.036399 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.036421 kubelet[2675]: E0904 23:50:59.036411 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.036685 kubelet[2675]: E0904 23:50:59.036659 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.036685 kubelet[2675]: W0904 23:50:59.036675 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.036766 kubelet[2675]: E0904 23:50:59.036687 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.036999 kubelet[2675]: E0904 23:50:59.036972 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.036999 kubelet[2675]: W0904 23:50:59.036986 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.036999 kubelet[2675]: E0904 23:50:59.036998 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.056354 kubelet[2675]: E0904 23:50:59.056282 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:50:59.095198 containerd[1500]: time="2025-09-04T23:50:59.095128815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9948499b-lcwpc,Uid:2d90e856-df4a-46de-ac76-ba54e6521b5a,Namespace:calico-system,Attempt:0,}" Sep 4 23:50:59.126775 containerd[1500]: time="2025-09-04T23:50:59.126705620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wsj4t,Uid:a944b2b5-20b5-48cb-bb5a-5db5fc4f4059,Namespace:calico-system,Attempt:0,}" Sep 4 23:50:59.128617 kubelet[2675]: E0904 23:50:59.128577 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.128617 kubelet[2675]: W0904 23:50:59.128600 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.128617 kubelet[2675]: E0904 23:50:59.128622 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.128870 kubelet[2675]: I0904 23:50:59.128655 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bhg\" (UniqueName: \"kubernetes.io/projected/22e67ab5-9d3d-4526-b31b-64c19a0aca9b-kube-api-access-s6bhg\") pod \"csi-node-driver-t8wcm\" (UID: \"22e67ab5-9d3d-4526-b31b-64c19a0aca9b\") " pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:50:59.128917 kubelet[2675]: E0904 23:50:59.128896 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.128917 kubelet[2675]: W0904 23:50:59.128910 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.128982 kubelet[2675]: E0904 23:50:59.128927 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.128982 kubelet[2675]: I0904 23:50:59.128947 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/22e67ab5-9d3d-4526-b31b-64c19a0aca9b-socket-dir\") pod \"csi-node-driver-t8wcm\" (UID: \"22e67ab5-9d3d-4526-b31b-64c19a0aca9b\") " pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:50:59.129250 kubelet[2675]: E0904 23:50:59.129229 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.129250 kubelet[2675]: W0904 23:50:59.129245 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.129336 kubelet[2675]: E0904 23:50:59.129261 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.129504 kubelet[2675]: E0904 23:50:59.129473 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.129504 kubelet[2675]: W0904 23:50:59.129488 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.129579 kubelet[2675]: E0904 23:50:59.129504 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.129743 kubelet[2675]: E0904 23:50:59.129715 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.129743 kubelet[2675]: W0904 23:50:59.129728 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.129823 kubelet[2675]: E0904 23:50:59.129743 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.129823 kubelet[2675]: I0904 23:50:59.129764 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/22e67ab5-9d3d-4526-b31b-64c19a0aca9b-registration-dir\") pod \"csi-node-driver-t8wcm\" (UID: \"22e67ab5-9d3d-4526-b31b-64c19a0aca9b\") " pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:50:59.130045 kubelet[2675]: E0904 23:50:59.130011 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.130045 kubelet[2675]: W0904 23:50:59.130029 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.130105 kubelet[2675]: E0904 23:50:59.130062 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.130366 kubelet[2675]: E0904 23:50:59.130349 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.130366 kubelet[2675]: W0904 23:50:59.130364 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.130444 kubelet[2675]: E0904 23:50:59.130381 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.130444 kubelet[2675]: I0904 23:50:59.130402 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e67ab5-9d3d-4526-b31b-64c19a0aca9b-kubelet-dir\") pod \"csi-node-driver-t8wcm\" (UID: \"22e67ab5-9d3d-4526-b31b-64c19a0aca9b\") " pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:50:59.130612 kubelet[2675]: E0904 23:50:59.130595 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.130612 kubelet[2675]: W0904 23:50:59.130610 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.130710 kubelet[2675]: E0904 23:50:59.130626 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.130854 kubelet[2675]: E0904 23:50:59.130837 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.130885 kubelet[2675]: W0904 23:50:59.130852 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.130885 kubelet[2675]: E0904 23:50:59.130866 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.131141 kubelet[2675]: E0904 23:50:59.131124 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.131141 kubelet[2675]: W0904 23:50:59.131138 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.131270 kubelet[2675]: E0904 23:50:59.131155 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.131376 kubelet[2675]: E0904 23:50:59.131360 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.131376 kubelet[2675]: W0904 23:50:59.131372 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.131450 kubelet[2675]: E0904 23:50:59.131383 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.131706 kubelet[2675]: E0904 23:50:59.131671 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.131706 kubelet[2675]: W0904 23:50:59.131693 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.131808 kubelet[2675]: E0904 23:50:59.131712 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.132019 kubelet[2675]: E0904 23:50:59.131979 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.132019 kubelet[2675]: W0904 23:50:59.131996 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.132019 kubelet[2675]: E0904 23:50:59.132013 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.132151 kubelet[2675]: I0904 23:50:59.132058 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/22e67ab5-9d3d-4526-b31b-64c19a0aca9b-varrun\") pod \"csi-node-driver-t8wcm\" (UID: \"22e67ab5-9d3d-4526-b31b-64c19a0aca9b\") " pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:50:59.132342 kubelet[2675]: E0904 23:50:59.132324 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.132342 kubelet[2675]: W0904 23:50:59.132337 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.132399 kubelet[2675]: E0904 23:50:59.132347 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.132552 kubelet[2675]: E0904 23:50:59.132537 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.132552 kubelet[2675]: W0904 23:50:59.132548 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.132628 kubelet[2675]: E0904 23:50:59.132558 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.233866 kubelet[2675]: E0904 23:50:59.233784 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.233866 kubelet[2675]: W0904 23:50:59.233820 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.233866 kubelet[2675]: E0904 23:50:59.233862 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.234309 kubelet[2675]: E0904 23:50:59.234207 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.234309 kubelet[2675]: W0904 23:50:59.234241 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.234309 kubelet[2675]: E0904 23:50:59.234264 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.235096 kubelet[2675]: E0904 23:50:59.234595 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.235096 kubelet[2675]: W0904 23:50:59.234611 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.235096 kubelet[2675]: E0904 23:50:59.234631 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.235096 kubelet[2675]: E0904 23:50:59.234960 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.235096 kubelet[2675]: W0904 23:50:59.234989 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.235096 kubelet[2675]: E0904 23:50:59.235017 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.235489 kubelet[2675]: E0904 23:50:59.235460 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.235489 kubelet[2675]: W0904 23:50:59.235474 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.235566 kubelet[2675]: E0904 23:50:59.235493 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.235785 kubelet[2675]: E0904 23:50:59.235765 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.235785 kubelet[2675]: W0904 23:50:59.235782 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.235863 kubelet[2675]: E0904 23:50:59.235804 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.236128 kubelet[2675]: E0904 23:50:59.236106 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.236128 kubelet[2675]: W0904 23:50:59.236127 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.236193 kubelet[2675]: E0904 23:50:59.236175 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.236454 kubelet[2675]: E0904 23:50:59.236424 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.236454 kubelet[2675]: W0904 23:50:59.236439 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.236858 kubelet[2675]: E0904 23:50:59.236588 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.236858 kubelet[2675]: E0904 23:50:59.236642 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.236858 kubelet[2675]: W0904 23:50:59.236650 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.236858 kubelet[2675]: E0904 23:50:59.236674 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.237061 kubelet[2675]: E0904 23:50:59.237025 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.237117 kubelet[2675]: W0904 23:50:59.237062 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.237117 kubelet[2675]: E0904 23:50:59.237106 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.237405 kubelet[2675]: E0904 23:50:59.237376 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.237405 kubelet[2675]: W0904 23:50:59.237390 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.237479 kubelet[2675]: E0904 23:50:59.237412 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.237723 kubelet[2675]: E0904 23:50:59.237703 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.237723 kubelet[2675]: W0904 23:50:59.237717 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.237799 kubelet[2675]: E0904 23:50:59.237736 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.238009 kubelet[2675]: E0904 23:50:59.237985 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.238009 kubelet[2675]: W0904 23:50:59.238001 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.238141 kubelet[2675]: E0904 23:50:59.238018 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.238283 kubelet[2675]: E0904 23:50:59.238264 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.238283 kubelet[2675]: W0904 23:50:59.238276 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.238348 kubelet[2675]: E0904 23:50:59.238309 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.238530 kubelet[2675]: E0904 23:50:59.238510 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.238530 kubelet[2675]: W0904 23:50:59.238523 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.238612 kubelet[2675]: E0904 23:50:59.238550 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.238756 kubelet[2675]: E0904 23:50:59.238738 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.238756 kubelet[2675]: W0904 23:50:59.238749 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.238822 kubelet[2675]: E0904 23:50:59.238770 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.238977 kubelet[2675]: E0904 23:50:59.238958 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.238977 kubelet[2675]: W0904 23:50:59.238969 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.239074 kubelet[2675]: E0904 23:50:59.238989 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.239195 kubelet[2675]: E0904 23:50:59.239178 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.239195 kubelet[2675]: W0904 23:50:59.239189 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.239266 kubelet[2675]: E0904 23:50:59.239202 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.239433 kubelet[2675]: E0904 23:50:59.239413 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.239433 kubelet[2675]: W0904 23:50:59.239428 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.239490 kubelet[2675]: E0904 23:50:59.239445 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.239727 kubelet[2675]: E0904 23:50:59.239708 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.239727 kubelet[2675]: W0904 23:50:59.239720 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.239817 kubelet[2675]: E0904 23:50:59.239737 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.240133 kubelet[2675]: E0904 23:50:59.240111 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.240186 kubelet[2675]: W0904 23:50:59.240137 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.240186 kubelet[2675]: E0904 23:50:59.240155 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.240438 kubelet[2675]: E0904 23:50:59.240417 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.240438 kubelet[2675]: W0904 23:50:59.240430 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.240504 kubelet[2675]: E0904 23:50:59.240444 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.240745 kubelet[2675]: E0904 23:50:59.240722 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.240792 kubelet[2675]: W0904 23:50:59.240755 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.240792 kubelet[2675]: E0904 23:50:59.240774 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.241126 kubelet[2675]: E0904 23:50:59.241103 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.241126 kubelet[2675]: W0904 23:50:59.241119 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.241126 kubelet[2675]: E0904 23:50:59.241130 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:50:59.273018 kubelet[2675]: E0904 23:50:59.272966 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.273018 kubelet[2675]: W0904 23:50:59.273001 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.273018 kubelet[2675]: E0904 23:50:59.273047 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:50:59.412026 kubelet[2675]: E0904 23:50:59.411883 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:50:59.412026 kubelet[2675]: W0904 23:50:59.411915 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:50:59.412026 kubelet[2675]: E0904 23:50:59.411941 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:00.251690 containerd[1500]: time="2025-09-04T23:51:00.250964384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:51:00.252715 containerd[1500]: time="2025-09-04T23:51:00.252477081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:51:00.252715 containerd[1500]: time="2025-09-04T23:51:00.252500325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:00.253699 containerd[1500]: time="2025-09-04T23:51:00.253548338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:00.258769 containerd[1500]: time="2025-09-04T23:51:00.257729269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:51:00.258769 containerd[1500]: time="2025-09-04T23:51:00.257879356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:51:00.258769 containerd[1500]: time="2025-09-04T23:51:00.257900235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:00.267442 containerd[1500]: time="2025-09-04T23:51:00.267009700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:00.308296 systemd[1]: Started cri-containerd-0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94.scope - libcontainer container 0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94. Sep 4 23:51:00.310521 systemd[1]: Started cri-containerd-acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af.scope - libcontainer container acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af. 
Sep 4 23:51:00.409270 containerd[1500]: time="2025-09-04T23:51:00.408130316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9948499b-lcwpc,Uid:2d90e856-df4a-46de-ac76-ba54e6521b5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94\"" Sep 4 23:51:00.410421 kubelet[2675]: E0904 23:51:00.410393 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:00.412299 containerd[1500]: time="2025-09-04T23:51:00.412246364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 4 23:51:00.421499 containerd[1500]: time="2025-09-04T23:51:00.421449657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wsj4t,Uid:a944b2b5-20b5-48cb-bb5a-5db5fc4f4059,Namespace:calico-system,Attempt:0,} returns sandbox id \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\"" Sep 4 23:51:01.120064 kubelet[2675]: E0904 23:51:01.119985 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:01.216223 systemd[1]: run-containerd-runc-k8s.io-0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94-runc.ESkIaj.mount: Deactivated successfully. Sep 4 23:51:03.120354 kubelet[2675]: E0904 23:51:03.120238 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:04.273833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308586288.mount: Deactivated successfully. 
Sep 4 23:51:05.119876 kubelet[2675]: E0904 23:51:05.119786 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:06.800941 containerd[1500]: time="2025-09-04T23:51:06.800853253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:06.836769 containerd[1500]: time="2025-09-04T23:51:06.836667941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 4 23:51:06.926015 containerd[1500]: time="2025-09-04T23:51:06.925934987Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:06.985873 containerd[1500]: time="2025-09-04T23:51:06.985798908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:06.987024 containerd[1500]: time="2025-09-04T23:51:06.986933210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 6.574398748s" Sep 4 23:51:06.987108 containerd[1500]: time="2025-09-04T23:51:06.987029403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 4 23:51:07.012702 containerd[1500]: time="2025-09-04T23:51:07.012644573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 4 23:51:07.096573 containerd[1500]: time="2025-09-04T23:51:07.096438255Z" level=info msg="CreateContainer within sandbox \"0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 23:51:07.120194 kubelet[2675]: E0904 23:51:07.120090 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:07.199719 containerd[1500]: time="2025-09-04T23:51:07.199171120Z" level=info msg="CreateContainer within sandbox \"0ea314f5b6331f33a88121525da492ff33ef6070f33cf93d32e2a001431b9c94\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b88ee043cf51a47c9bb0e4e4c4a7b38e029c6f82614d48bc3f07a86b5b16fea5\"" Sep 4 23:51:07.206665 containerd[1500]: time="2025-09-04T23:51:07.204668613Z" level=info msg="StartContainer for \"b88ee043cf51a47c9bb0e4e4c4a7b38e029c6f82614d48bc3f07a86b5b16fea5\"" Sep 4 23:51:07.242789 systemd[1]: Started cri-containerd-b88ee043cf51a47c9bb0e4e4c4a7b38e029c6f82614d48bc3f07a86b5b16fea5.scope - libcontainer container b88ee043cf51a47c9bb0e4e4c4a7b38e029c6f82614d48bc3f07a86b5b16fea5. 
Sep 4 23:51:07.381303 containerd[1500]: time="2025-09-04T23:51:07.381111324Z" level=info msg="StartContainer for \"b88ee043cf51a47c9bb0e4e4c4a7b38e029c6f82614d48bc3f07a86b5b16fea5\" returns successfully" Sep 4 23:51:08.278334 kubelet[2675]: E0904 23:51:08.278181 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:08.293816 kubelet[2675]: E0904 23:51:08.293741 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.293816 kubelet[2675]: W0904 23:51:08.293783 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.293816 kubelet[2675]: E0904 23:51:08.293824 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.294201 kubelet[2675]: E0904 23:51:08.294178 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.294201 kubelet[2675]: W0904 23:51:08.294193 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.294324 kubelet[2675]: E0904 23:51:08.294220 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.294324 kubelet[2675]: E0904 23:51:08.294617 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.294324 kubelet[2675]: W0904 23:51:08.294632 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.294324 kubelet[2675]: E0904 23:51:08.294646 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.295447 kubelet[2675]: E0904 23:51:08.295159 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.295447 kubelet[2675]: W0904 23:51:08.295182 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.295447 kubelet[2675]: E0904 23:51:08.295198 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.296266 kubelet[2675]: E0904 23:51:08.295992 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.296371 kubelet[2675]: W0904 23:51:08.296304 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.296371 kubelet[2675]: E0904 23:51:08.296331 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.296865 kubelet[2675]: E0904 23:51:08.296821 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.296865 kubelet[2675]: W0904 23:51:08.296859 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.297010 kubelet[2675]: E0904 23:51:08.296874 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.297563 kubelet[2675]: E0904 23:51:08.297340 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.297563 kubelet[2675]: W0904 23:51:08.297371 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.297563 kubelet[2675]: E0904 23:51:08.297405 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.297897 kubelet[2675]: E0904 23:51:08.297845 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.297959 kubelet[2675]: W0904 23:51:08.297896 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.297959 kubelet[2675]: E0904 23:51:08.297935 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.298754 kubelet[2675]: E0904 23:51:08.298590 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.298754 kubelet[2675]: W0904 23:51:08.298610 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.298754 kubelet[2675]: E0904 23:51:08.298626 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.299082 kubelet[2675]: E0904 23:51:08.299063 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.299082 kubelet[2675]: W0904 23:51:08.299079 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.299421 kubelet[2675]: E0904 23:51:08.299092 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.300069 kubelet[2675]: E0904 23:51:08.299987 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.300069 kubelet[2675]: W0904 23:51:08.300021 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.300069 kubelet[2675]: E0904 23:51:08.300076 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.300644 kubelet[2675]: E0904 23:51:08.300611 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.300644 kubelet[2675]: W0904 23:51:08.300643 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.300794 kubelet[2675]: E0904 23:51:08.300665 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.301640 kubelet[2675]: E0904 23:51:08.301592 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.302280 kubelet[2675]: W0904 23:51:08.301629 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.302280 kubelet[2675]: E0904 23:51:08.301786 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.305062 kubelet[2675]: E0904 23:51:08.303028 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.305062 kubelet[2675]: W0904 23:51:08.303121 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.305062 kubelet[2675]: E0904 23:51:08.303137 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.305062 kubelet[2675]: E0904 23:51:08.303552 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.305062 kubelet[2675]: W0904 23:51:08.303604 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.305062 kubelet[2675]: E0904 23:51:08.303644 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.305391 kubelet[2675]: E0904 23:51:08.305130 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.305391 kubelet[2675]: W0904 23:51:08.305144 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.305391 kubelet[2675]: E0904 23:51:08.305161 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.307022 kubelet[2675]: E0904 23:51:08.306981 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.307022 kubelet[2675]: W0904 23:51:08.307014 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.307217 kubelet[2675]: E0904 23:51:08.307071 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.308617 kubelet[2675]: E0904 23:51:08.308591 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.308617 kubelet[2675]: W0904 23:51:08.308613 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.308800 kubelet[2675]: E0904 23:51:08.308782 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.309154 kubelet[2675]: E0904 23:51:08.309132 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.309154 kubelet[2675]: W0904 23:51:08.309153 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.309343 kubelet[2675]: E0904 23:51:08.309301 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.309650 kubelet[2675]: E0904 23:51:08.309630 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.309650 kubelet[2675]: W0904 23:51:08.309647 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.309800 kubelet[2675]: E0904 23:51:08.309778 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.310267 kubelet[2675]: E0904 23:51:08.310085 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.310267 kubelet[2675]: W0904 23:51:08.310103 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.310267 kubelet[2675]: E0904 23:51:08.310123 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.310834 kubelet[2675]: E0904 23:51:08.310700 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.310834 kubelet[2675]: W0904 23:51:08.310715 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.310834 kubelet[2675]: E0904 23:51:08.310809 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.311697 kubelet[2675]: E0904 23:51:08.311527 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.311697 kubelet[2675]: W0904 23:51:08.311543 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.311697 kubelet[2675]: E0904 23:51:08.311659 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.312155 kubelet[2675]: E0904 23:51:08.311831 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.312155 kubelet[2675]: W0904 23:51:08.311844 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.312155 kubelet[2675]: E0904 23:51:08.311937 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.312584 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.313756 kubelet[2675]: W0904 23:51:08.312602 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.313005 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.313756 kubelet[2675]: W0904 23:51:08.313017 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.313079 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.313591 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.313756 kubelet[2675]: W0904 23:51:08.313603 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.313616 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.313756 kubelet[2675]: E0904 23:51:08.313621 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.314384 kubelet[2675]: E0904 23:51:08.314363 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.314384 kubelet[2675]: W0904 23:51:08.314379 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.314502 kubelet[2675]: E0904 23:51:08.314424 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.315895 kubelet[2675]: E0904 23:51:08.315806 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.315895 kubelet[2675]: W0904 23:51:08.315823 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.316928 kubelet[2675]: E0904 23:51:08.315926 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.316928 kubelet[2675]: E0904 23:51:08.316227 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.316928 kubelet[2675]: W0904 23:51:08.316241 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.316928 kubelet[2675]: E0904 23:51:08.316277 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.316928 kubelet[2675]: E0904 23:51:08.316596 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.316928 kubelet[2675]: W0904 23:51:08.316623 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.316928 kubelet[2675]: E0904 23:51:08.316638 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.317199 kubelet[2675]: E0904 23:51:08.316969 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.317199 kubelet[2675]: W0904 23:51:08.316983 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.317199 kubelet[2675]: E0904 23:51:08.316998 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:08.318012 kubelet[2675]: E0904 23:51:08.317991 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:08.318012 kubelet[2675]: W0904 23:51:08.318008 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:08.318152 kubelet[2675]: E0904 23:51:08.318029 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:08.336591 kubelet[2675]: I0904 23:51:08.336451 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9948499b-lcwpc" podStartSLOduration=3.73551416 podStartE2EDuration="10.336416209s" podCreationTimestamp="2025-09-04 23:50:58 +0000 UTC" firstStartedPulling="2025-09-04 23:51:00.4111559 +0000 UTC m=+50.389076200" lastFinishedPulling="2025-09-04 23:51:07.012057949 +0000 UTC m=+56.989978249" observedRunningTime="2025-09-04 23:51:08.314401311 +0000 UTC m=+58.292321701" watchObservedRunningTime="2025-09-04 23:51:08.336416209 +0000 UTC m=+58.314336509" Sep 4 23:51:09.119747 kubelet[2675]: E0904 23:51:09.119676 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:09.280556 kubelet[2675]: E0904 23:51:09.280498 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:09.313772 kubelet[2675]: E0904 23:51:09.313729 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.313772 kubelet[2675]: W0904 23:51:09.313763 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.313772 kubelet[2675]: E0904 23:51:09.313791 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.314216 kubelet[2675]: E0904 23:51:09.314074 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.314216 kubelet[2675]: W0904 23:51:09.314088 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.314216 kubelet[2675]: E0904 23:51:09.314106 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.314386 kubelet[2675]: E0904 23:51:09.314329 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.314386 kubelet[2675]: W0904 23:51:09.314344 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.314386 kubelet[2675]: E0904 23:51:09.314359 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.314664 kubelet[2675]: E0904 23:51:09.314636 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.314664 kubelet[2675]: W0904 23:51:09.314652 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.314739 kubelet[2675]: E0904 23:51:09.314665 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.315071 kubelet[2675]: E0904 23:51:09.315053 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.315071 kubelet[2675]: W0904 23:51:09.315069 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.315176 kubelet[2675]: E0904 23:51:09.315082 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.315349 kubelet[2675]: E0904 23:51:09.315320 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.315349 kubelet[2675]: W0904 23:51:09.315336 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.315349 kubelet[2675]: E0904 23:51:09.315347 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.315573 kubelet[2675]: E0904 23:51:09.315556 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.315573 kubelet[2675]: W0904 23:51:09.315568 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.315642 kubelet[2675]: E0904 23:51:09.315576 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.315770 kubelet[2675]: E0904 23:51:09.315756 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.315770 kubelet[2675]: W0904 23:51:09.315766 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.315814 kubelet[2675]: E0904 23:51:09.315774 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.315985 kubelet[2675]: E0904 23:51:09.315973 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.316013 kubelet[2675]: W0904 23:51:09.315983 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.316013 kubelet[2675]: E0904 23:51:09.315992 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.316220 kubelet[2675]: E0904 23:51:09.316195 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.316220 kubelet[2675]: W0904 23:51:09.316216 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.316277 kubelet[2675]: E0904 23:51:09.316225 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.316417 kubelet[2675]: E0904 23:51:09.316405 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.316417 kubelet[2675]: W0904 23:51:09.316415 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.316458 kubelet[2675]: E0904 23:51:09.316423 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.316609 kubelet[2675]: E0904 23:51:09.316597 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.316609 kubelet[2675]: W0904 23:51:09.316607 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.316650 kubelet[2675]: E0904 23:51:09.316615 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.316818 kubelet[2675]: E0904 23:51:09.316805 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.316849 kubelet[2675]: W0904 23:51:09.316817 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.316849 kubelet[2675]: E0904 23:51:09.316827 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.317030 kubelet[2675]: E0904 23:51:09.317018 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.317030 kubelet[2675]: W0904 23:51:09.317028 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.317113 kubelet[2675]: E0904 23:51:09.317065 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.317283 kubelet[2675]: E0904 23:51:09.317266 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.317283 kubelet[2675]: W0904 23:51:09.317276 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.317344 kubelet[2675]: E0904 23:51:09.317284 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.318545 kubelet[2675]: E0904 23:51:09.318528 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.318545 kubelet[2675]: W0904 23:51:09.318543 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.318632 kubelet[2675]: E0904 23:51:09.318556 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.318867 kubelet[2675]: E0904 23:51:09.318820 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.318867 kubelet[2675]: W0904 23:51:09.318836 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.318867 kubelet[2675]: E0904 23:51:09.318855 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.319140 kubelet[2675]: E0904 23:51:09.319119 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.319140 kubelet[2675]: W0904 23:51:09.319139 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.319194 kubelet[2675]: E0904 23:51:09.319157 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.319406 kubelet[2675]: E0904 23:51:09.319391 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.319406 kubelet[2675]: W0904 23:51:09.319402 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.319460 kubelet[2675]: E0904 23:51:09.319415 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.319630 kubelet[2675]: E0904 23:51:09.319616 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.319630 kubelet[2675]: W0904 23:51:09.319627 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.319674 kubelet[2675]: E0904 23:51:09.319638 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.319897 kubelet[2675]: E0904 23:51:09.319868 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.319897 kubelet[2675]: W0904 23:51:09.319890 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.319958 kubelet[2675]: E0904 23:51:09.319909 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.320223 kubelet[2675]: E0904 23:51:09.320194 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.320223 kubelet[2675]: W0904 23:51:09.320218 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.320286 kubelet[2675]: E0904 23:51:09.320233 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.320489 kubelet[2675]: E0904 23:51:09.320470 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.320489 kubelet[2675]: W0904 23:51:09.320487 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.320544 kubelet[2675]: E0904 23:51:09.320505 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.320880 kubelet[2675]: E0904 23:51:09.320845 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.320880 kubelet[2675]: W0904 23:51:09.320875 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.320966 kubelet[2675]: E0904 23:51:09.320910 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.321303 kubelet[2675]: E0904 23:51:09.321279 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.321303 kubelet[2675]: W0904 23:51:09.321299 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.321397 kubelet[2675]: E0904 23:51:09.321320 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.321565 kubelet[2675]: E0904 23:51:09.321547 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.321565 kubelet[2675]: W0904 23:51:09.321562 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.321633 kubelet[2675]: E0904 23:51:09.321579 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.321816 kubelet[2675]: E0904 23:51:09.321801 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.321816 kubelet[2675]: W0904 23:51:09.321812 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.321868 kubelet[2675]: E0904 23:51:09.321828 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.322090 kubelet[2675]: E0904 23:51:09.322074 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.322090 kubelet[2675]: W0904 23:51:09.322086 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.322158 kubelet[2675]: E0904 23:51:09.322116 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.322321 kubelet[2675]: E0904 23:51:09.322305 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.322321 kubelet[2675]: W0904 23:51:09.322317 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.322388 kubelet[2675]: E0904 23:51:09.322353 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.322546 kubelet[2675]: E0904 23:51:09.322528 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.322546 kubelet[2675]: W0904 23:51:09.322539 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.322663 kubelet[2675]: E0904 23:51:09.322554 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.322828 kubelet[2675]: E0904 23:51:09.322799 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.322828 kubelet[2675]: W0904 23:51:09.322814 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.322876 kubelet[2675]: E0904 23:51:09.322828 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.323121 kubelet[2675]: E0904 23:51:09.323105 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.323167 kubelet[2675]: W0904 23:51:09.323120 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.323167 kubelet[2675]: E0904 23:51:09.323133 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 23:51:09.323834 kubelet[2675]: E0904 23:51:09.323773 2675 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 23:51:09.323834 kubelet[2675]: W0904 23:51:09.323788 2675 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 23:51:09.323834 kubelet[2675]: E0904 23:51:09.323798 2675 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 23:51:09.391242 containerd[1500]: time="2025-09-04T23:51:09.390812274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.455297 containerd[1500]: time="2025-09-04T23:51:09.455163103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 4 23:51:09.505734 containerd[1500]: time="2025-09-04T23:51:09.505656299Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.540700 containerd[1500]: time="2025-09-04T23:51:09.540594419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:09.541815 containerd[1500]: time="2025-09-04T23:51:09.541733881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.529031286s" Sep 4 23:51:09.541952 containerd[1500]: time="2025-09-04T23:51:09.541826546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 4 23:51:09.545322 containerd[1500]: time="2025-09-04T23:51:09.545255099Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 23:51:09.643637 containerd[1500]: time="2025-09-04T23:51:09.643282115Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09\"" Sep 4 23:51:09.644407 containerd[1500]: time="2025-09-04T23:51:09.644356493Z" level=info msg="StartContainer for \"c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09\"" Sep 4 23:51:09.687479 systemd[1]: Started cri-containerd-c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09.scope - libcontainer container c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09. Sep 4 23:51:09.746960 containerd[1500]: time="2025-09-04T23:51:09.746838230Z" level=info msg="StartContainer for \"c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09\" returns successfully" Sep 4 23:51:09.761458 systemd[1]: cri-containerd-c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09.scope: Deactivated successfully. Sep 4 23:51:09.795967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09-rootfs.mount: Deactivated successfully. 
Sep 4 23:51:10.227288 kubelet[2675]: E0904 23:51:10.227219 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:10.283444 kubelet[2675]: E0904 23:51:10.283371 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:11.443762 containerd[1500]: time="2025-09-04T23:51:11.443618693Z" level=info msg="shim disconnected" id=c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09 namespace=k8s.io Sep 4 23:51:11.443762 containerd[1500]: time="2025-09-04T23:51:11.443731668Z" level=warning msg="cleaning up after shim disconnected" id=c16aac80b3b194e1cff9a79c412c537b0eddd553402a5dbb7f5c7267eca55d09 namespace=k8s.io Sep 4 23:51:11.443762 containerd[1500]: time="2025-09-04T23:51:11.443746896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:12.120003 kubelet[2675]: E0904 23:51:12.119864 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:12.291874 containerd[1500]: time="2025-09-04T23:51:12.291802927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 4 23:51:14.123773 kubelet[2675]: E0904 23:51:14.123708 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:16.120449 kubelet[2675]: E0904 23:51:16.120325 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:18.119838 kubelet[2675]: E0904 23:51:18.119740 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:20.119982 kubelet[2675]: E0904 23:51:20.119867 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:20.467979 containerd[1500]: time="2025-09-04T23:51:20.466874837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:20.469284 containerd[1500]: time="2025-09-04T23:51:20.469018553Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 4 23:51:20.470599 containerd[1500]: time="2025-09-04T23:51:20.470532487Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:20.473964 containerd[1500]: time="2025-09-04T23:51:20.473876105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:20.474989 containerd[1500]: time="2025-09-04T23:51:20.474911955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 8.183039026s" Sep 4 23:51:20.475092 containerd[1500]: time="2025-09-04T23:51:20.475003368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 4 23:51:20.490828 containerd[1500]: time="2025-09-04T23:51:20.490734666Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 23:51:20.518532 containerd[1500]: time="2025-09-04T23:51:20.518427209Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6\"" Sep 4 23:51:20.519936 containerd[1500]: time="2025-09-04T23:51:20.519539043Z" level=info msg="StartContainer for \"1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6\"" Sep 4 23:51:20.563173 systemd[1]: run-containerd-runc-k8s.io-1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6-runc.Pgdrry.mount: Deactivated successfully. Sep 4 23:51:20.574352 systemd[1]: Started cri-containerd-1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6.scope - libcontainer container 1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6. Sep 4 23:51:20.627762 containerd[1500]: time="2025-09-04T23:51:20.627430646Z" level=info msg="StartContainer for \"1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6\" returns successfully" Sep 4 23:51:22.119815 kubelet[2675]: E0904 23:51:22.119726 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:22.882310 systemd[1]: cri-containerd-1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6.scope: Deactivated successfully. Sep 4 23:51:22.882956 systemd[1]: cri-containerd-1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6.scope: Consumed 921ms CPU time, 176.2M memory peak, 3.3M read from disk, 171.3M written to disk. 
Sep 4 23:51:22.914819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6-rootfs.mount: Deactivated successfully. Sep 4 23:51:22.931064 containerd[1500]: time="2025-09-04T23:51:22.930938348Z" level=info msg="shim disconnected" id=1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6 namespace=k8s.io Sep 4 23:51:22.931064 containerd[1500]: time="2025-09-04T23:51:22.931019652Z" level=warning msg="cleaning up after shim disconnected" id=1783a944658d5426ad5ecc0e86380bb156ec899304b29d102fff80cd266e58f6 namespace=k8s.io Sep 4 23:51:22.931064 containerd[1500]: time="2025-09-04T23:51:22.931056130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:51:22.984718 kubelet[2675]: I0904 23:51:22.984624 2675 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:51:23.054245 systemd[1]: Created slice kubepods-besteffort-poda8eddde7_2ee6_48c8_8135_d8f649b5e715.slice - libcontainer container kubepods-besteffort-poda8eddde7_2ee6_48c8_8135_d8f649b5e715.slice. Sep 4 23:51:23.065493 systemd[1]: Created slice kubepods-burstable-pod98a345f7_7f13_4446_8508_3763699201e1.slice - libcontainer container kubepods-burstable-pod98a345f7_7f13_4446_8508_3763699201e1.slice. Sep 4 23:51:23.080238 systemd[1]: Created slice kubepods-besteffort-pod210f206c_d2e9_4f37_8e9c_bb39c462e3a5.slice - libcontainer container kubepods-besteffort-pod210f206c_d2e9_4f37_8e9c_bb39c462e3a5.slice. Sep 4 23:51:23.088277 systemd[1]: Created slice kubepods-besteffort-poda94fd484_c0f8_40c8_aa5a_0e730f68f3a7.slice - libcontainer container kubepods-besteffort-poda94fd484_c0f8_40c8_aa5a_0e730f68f3a7.slice. Sep 4 23:51:23.095967 systemd[1]: Created slice kubepods-besteffort-pod4dda5f32_9863_42ac_8d05_03239bb6d11e.slice - libcontainer container kubepods-besteffort-pod4dda5f32_9863_42ac_8d05_03239bb6d11e.slice. Sep 4 23:51:23.102995 systemd[1]: Created slice kubepods-besteffort-pod518a1d0b_97e3_468a_b085_a8ed59e3b9de.slice - libcontainer container kubepods-besteffort-pod518a1d0b_97e3_468a_b085_a8ed59e3b9de.slice. Sep 4 23:51:23.112782 systemd[1]: Created slice kubepods-burstable-podeae99b8b_7303_450f_a486_e66279670f06.slice - libcontainer container kubepods-burstable-podeae99b8b_7303_450f_a486_e66279670f06.slice. 
Sep 4 23:51:23.124657 kubelet[2675]: I0904 23:51:23.124491 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65n52\" (UniqueName: \"kubernetes.io/projected/a94fd484-c0f8-40c8-aa5a-0e730f68f3a7-kube-api-access-65n52\") pod \"calico-apiserver-5f5f7b99c-lbsxp\" (UID: \"a94fd484-c0f8-40c8-aa5a-0e730f68f3a7\") " pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:23.124657 kubelet[2675]: I0904 23:51:23.124639 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrdf\" (UniqueName: \"kubernetes.io/projected/a8eddde7-2ee6-48c8-8135-d8f649b5e715-kube-api-access-ngrdf\") pod \"calico-kube-controllers-647fc84596-qpbf5\" (UID: \"a8eddde7-2ee6-48c8-8135-d8f649b5e715\") " pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:23.124657 kubelet[2675]: I0904 23:51:23.124672 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/518a1d0b-97e3-468a-b085-a8ed59e3b9de-calico-apiserver-certs\") pod \"calico-apiserver-5f5f7b99c-t4zg6\" (UID: \"518a1d0b-97e3-468a-b085-a8ed59e3b9de\") " pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:23.125406 kubelet[2675]: I0904 23:51:23.124699 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-backend-key-pair\") pod \"whisker-57d55db56d-ts945\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:23.125406 kubelet[2675]: I0904 23:51:23.124724 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rlb\" (UniqueName: \"kubernetes.io/projected/518a1d0b-97e3-468a-b085-a8ed59e3b9de-kube-api-access-g9rlb\") pod \"calico-apiserver-5f5f7b99c-t4zg6\" (UID: \"518a1d0b-97e3-468a-b085-a8ed59e3b9de\") " pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:23.125406 kubelet[2675]: I0904 23:51:23.124871 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzfvf\" (UniqueName: \"kubernetes.io/projected/98a345f7-7f13-4446-8508-3763699201e1-kube-api-access-xzfvf\") pod \"coredns-668d6bf9bc-ph5zt\" (UID: \"98a345f7-7f13-4446-8508-3763699201e1\") " pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:23.125406 kubelet[2675]: I0904 23:51:23.124901 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnf59\" (UniqueName: \"kubernetes.io/projected/4dda5f32-9863-42ac-8d05-03239bb6d11e-kube-api-access-vnf59\") pod \"goldmane-54d579b49d-w4bmm\" (UID: \"4dda5f32-9863-42ac-8d05-03239bb6d11e\") " pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:23.125406 kubelet[2675]: I0904 23:51:23.124930 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dda5f32-9863-42ac-8d05-03239bb6d11e-config\") pod \"goldmane-54d579b49d-w4bmm\" (UID: \"4dda5f32-9863-42ac-8d05-03239bb6d11e\") " pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:23.125645 kubelet[2675]: I0904 23:51:23.124955 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/a8eddde7-2ee6-48c8-8135-d8f649b5e715-tigera-ca-bundle\") pod \"calico-kube-controllers-647fc84596-qpbf5\" (UID: \"a8eddde7-2ee6-48c8-8135-d8f649b5e715\") " pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:23.125645 kubelet[2675]: I0904 23:51:23.124979 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a94fd484-c0f8-40c8-aa5a-0e730f68f3a7-calico-apiserver-certs\") pod \"calico-apiserver-5f5f7b99c-lbsxp\" (UID: \"a94fd484-c0f8-40c8-aa5a-0e730f68f3a7\") " pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:23.125645 kubelet[2675]: I0904 23:51:23.125000 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98a345f7-7f13-4446-8508-3763699201e1-config-volume\") pod \"coredns-668d6bf9bc-ph5zt\" (UID: \"98a345f7-7f13-4446-8508-3763699201e1\") " pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:23.125645 kubelet[2675]: I0904 23:51:23.125021 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4dda5f32-9863-42ac-8d05-03239bb6d11e-goldmane-key-pair\") pod \"goldmane-54d579b49d-w4bmm\" (UID: \"4dda5f32-9863-42ac-8d05-03239bb6d11e\") " pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:23.125645 kubelet[2675]: I0904 23:51:23.125069 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eae99b8b-7303-450f-a486-e66279670f06-config-volume\") pod \"coredns-668d6bf9bc-lr5vb\" (UID: \"eae99b8b-7303-450f-a486-e66279670f06\") " pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:23.125818 kubelet[2675]: I0904 23:51:23.125094 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4dda5f32-9863-42ac-8d05-03239bb6d11e-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-w4bmm\" (UID: \"4dda5f32-9863-42ac-8d05-03239bb6d11e\") " pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:23.125818 kubelet[2675]: I0904 23:51:23.125219 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8v5\" (UniqueName: \"kubernetes.io/projected/eae99b8b-7303-450f-a486-e66279670f06-kube-api-access-xs8v5\") pod \"coredns-668d6bf9bc-lr5vb\" (UID: \"eae99b8b-7303-450f-a486-e66279670f06\") " pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:23.125818 kubelet[2675]: I0904 23:51:23.125300 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-ca-bundle\") pod \"whisker-57d55db56d-ts945\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:23.125818 kubelet[2675]: I0904 23:51:23.125328 2675 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtkb\" (UniqueName: \"kubernetes.io/projected/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-kube-api-access-cmtkb\") pod \"whisker-57d55db56d-ts945\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:23.320815 
containerd[1500]: time="2025-09-04T23:51:23.320751865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 4 23:51:23.360921 containerd[1500]: time="2025-09-04T23:51:23.360851786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:0,}" Sep 4 23:51:23.371448 kubelet[2675]: E0904 23:51:23.371369 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:23.372149 containerd[1500]: time="2025-09-04T23:51:23.372078059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:0,}" Sep 4 23:51:23.385134 containerd[1500]: time="2025-09-04T23:51:23.385063158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:0,}" Sep 4 23:51:23.392911 containerd[1500]: time="2025-09-04T23:51:23.392835248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:0,}" Sep 4 23:51:23.402456 containerd[1500]: time="2025-09-04T23:51:23.402315909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:0,}" Sep 4 23:51:23.409328 containerd[1500]: time="2025-09-04T23:51:23.409272465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:0,}" Sep 4 23:51:23.416647 kubelet[2675]: E0904 23:51:23.416588 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:23.417182 containerd[1500]: time="2025-09-04T23:51:23.417136229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:0,}" Sep 4 23:51:24.132663 systemd[1]: Created slice kubepods-besteffort-pod22e67ab5_9d3d_4526_b31b_64c19a0aca9b.slice - libcontainer container kubepods-besteffort-pod22e67ab5_9d3d_4526_b31b_64c19a0aca9b.slice. 
Sep 4 23:51:24.137493 containerd[1500]: time="2025-09-04T23:51:24.137422193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:0,}" Sep 4 23:51:24.204360 containerd[1500]: time="2025-09-04T23:51:24.204243784Z" level=error msg="Failed to destroy network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.209651 containerd[1500]: time="2025-09-04T23:51:24.209510583Z" level=error msg="encountered an error cleaning up failed sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.209885 containerd[1500]: time="2025-09-04T23:51:24.209676898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.210553 kubelet[2675]: E0904 23:51:24.210465 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.211250 kubelet[2675]: E0904 23:51:24.210598 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:24.211250 kubelet[2675]: E0904 23:51:24.210636 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:24.211250 kubelet[2675]: E0904 23:51:24.210706 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podUID="a94fd484-c0f8-40c8-aa5a-0e730f68f3a7" Sep 4 23:51:24.321688 kubelet[2675]: I0904 23:51:24.321633 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de" Sep 4 23:51:24.328086 containerd[1500]: time="2025-09-04T23:51:24.326363044Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:51:24.341602 containerd[1500]: time="2025-09-04T23:51:24.341508534Z" level=info msg="Ensure that sandbox 3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de in task-service has been cleanup successfully" Sep 4 23:51:24.341922 containerd[1500]: time="2025-09-04T23:51:24.341898361Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:51:24.341922 containerd[1500]: time="2025-09-04T23:51:24.341919652Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:51:24.343283 containerd[1500]: time="2025-09-04T23:51:24.343220421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:1,}" Sep 4 23:51:24.346578 containerd[1500]: time="2025-09-04T23:51:24.346540260Z" level=error msg="Failed to destroy network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.347100 containerd[1500]: time="2025-09-04T23:51:24.347071824Z" level=error msg="encountered an error cleaning up failed sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.347180 containerd[1500]: time="2025-09-04T23:51:24.347158238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.347533 kubelet[2675]: E0904 23:51:24.347466 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.347738 kubelet[2675]: E0904 23:51:24.347564 2675 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:24.347738 kubelet[2675]: E0904 23:51:24.347601 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:24.347738 kubelet[2675]: E0904 23:51:24.347678 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ph5zt" podUID="98a345f7-7f13-4446-8508-3763699201e1" Sep 4 23:51:24.468537 containerd[1500]: time="2025-09-04T23:51:24.468280783Z" level=error msg="Failed to destroy network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.469426 containerd[1500]: time="2025-09-04T23:51:24.468981248Z" level=error msg="encountered an error cleaning up failed sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.469426 containerd[1500]: time="2025-09-04T23:51:24.469120851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.469601 kubelet[2675]: E0904 23:51:24.469531 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.469694 kubelet[2675]: E0904 
23:51:24.469633 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:24.469694 kubelet[2675]: E0904 23:51:24.469665 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:24.469775 kubelet[2675]: E0904 23:51:24.469733 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d55db56d-ts945" podUID="210f206c-d2e9-4f37-8e9c-bb39c462e3a5" Sep 4 23:51:24.571616 containerd[1500]: time="2025-09-04T23:51:24.571536652Z" level=error msg="Failed to destroy network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.572251 containerd[1500]: time="2025-09-04T23:51:24.572210376Z" level=error msg="encountered an error cleaning up failed sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.572325 containerd[1500]: time="2025-09-04T23:51:24.572289185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.572666 kubelet[2675]: E0904 23:51:24.572604 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 4 23:51:24.572748 kubelet[2675]: E0904 23:51:24.572702 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:24.572748 kubelet[2675]: E0904 23:51:24.572734 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:24.572829 kubelet[2675]: E0904 23:51:24.572802 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" podUID="a8eddde7-2ee6-48c8-8135-d8f649b5e715" Sep 4 23:51:24.617006 containerd[1500]: time="2025-09-04T23:51:24.616923219Z" level=error msg="Failed to destroy network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.617606 containerd[1500]: time="2025-09-04T23:51:24.617557769Z" level=error msg="encountered an error cleaning up failed sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.617672 containerd[1500]: time="2025-09-04T23:51:24.617641347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.618191 kubelet[2675]: E0904 23:51:24.617993 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.618191 kubelet[2675]: E0904 23:51:24.618150 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:24.618191 kubelet[2675]: E0904 23:51:24.618186 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:24.618351 kubelet[2675]: E0904 23:51:24.618259 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-w4bmm" podUID="4dda5f32-9863-42ac-8d05-03239bb6d11e" Sep 4 23:51:24.694785 containerd[1500]: time="2025-09-04T23:51:24.694706419Z" level=error msg="Failed to destroy network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.695251 containerd[1500]: time="2025-09-04T23:51:24.695211974Z" level=error msg="encountered an error cleaning up failed sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.695294 containerd[1500]: time="2025-09-04T23:51:24.695278661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.695636 kubelet[2675]: E0904 23:51:24.695575 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.695697 kubelet[2675]: E0904 23:51:24.695664 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:24.695726 kubelet[2675]: E0904 23:51:24.695699 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:24.695781 kubelet[2675]: E0904 23:51:24.695754 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" podUID="518a1d0b-97e3-468a-b085-a8ed59e3b9de" Sep 4 23:51:24.738180 containerd[1500]: time="2025-09-04T23:51:24.737955934Z" level=error msg="Failed to destroy network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.738523 containerd[1500]: time="2025-09-04T23:51:24.738469777Z" level=error msg="encountered an error cleaning up failed sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.738677 containerd[1500]: time="2025-09-04T23:51:24.738544167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.739418 kubelet[2675]: E0904 23:51:24.739013 2675 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:24.739499 kubelet[2675]: E0904 23:51:24.739435 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:24.739499 kubelet[2675]: E0904 23:51:24.739464 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:24.739584 kubelet[2675]: E0904 23:51:24.739539 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lr5vb" podUID="eae99b8b-7303-450f-a486-e66279670f06" Sep 4 23:51:24.917242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562-shm.mount: Deactivated successfully. Sep 4 23:51:24.917405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961-shm.mount: Deactivated successfully. Sep 4 23:51:24.917519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18-shm.mount: Deactivated successfully. Sep 4 23:51:24.917619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81-shm.mount: Deactivated successfully. Sep 4 23:51:24.917724 systemd[1]: run-netns-cni\x2dd2db6443\x2d6caf\x2d81d3\x2d80f9\x2d77138e0ea15f.mount: Deactivated successfully. Sep 4 23:51:24.917828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de-shm.mount: Deactivated successfully. 
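Every sandbox create and teardown in the burst above fails on the one condition the plugin itself spells out: stat /var/lib/calico/nodename returns "no such file or directory", meaning the file the error text expects the calico/node container to have provided under /var/lib/calico/ is not there yet. Below is a minimal diagnostic sketch in Go, assuming nothing beyond the path quoted in the error; the program name, exit codes and output wording are the editor's and do not come from any tool referenced in this log.

// nodenamecheck is a hypothetical, editor-written sketch: it performs the same
// stat that the CNI plugin reports failing above, so it can be run on the node
// to confirm whether calico/node has written /var/lib/calico/nodename yet.
package main

import (
    "fmt"
    "os"
)

const nodenamePath = "/var/lib/calico/nodename" // path taken from the error text above

func main() {
    info, err := os.Stat(nodenamePath)
    if err != nil {
        if os.IsNotExist(err) {
            // Matches the plugin error: calico/node has not started on this host
            // (or has not mounted /var/lib/calico/), so CNI ADD/DEL keeps failing.
            fmt.Printf("%s is missing: calico/node has not initialised this node yet\n", nodenamePath)
            os.Exit(1)
        }
        fmt.Printf("stat %s: %v\n", nodenamePath, err)
        os.Exit(1)
    }
    contents, err := os.ReadFile(nodenamePath)
    if err != nil {
        fmt.Printf("read %s: %v\n", nodenamePath, err)
        os.Exit(1)
    }
    fmt.Printf("%s present (%d bytes, mode %v): node name %q\n",
        nodenamePath, info.Size(), info.Mode(), string(contents))
}

Run on the affected host, a non-zero exit simply confirms that the RunPodSandbox retries recorded here will keep failing until the calico-node pod for this node becomes ready and that file appears.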
Sep 4 23:51:25.325199 kubelet[2675]: I0904 23:51:25.324163 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5" Sep 4 23:51:25.325759 containerd[1500]: time="2025-09-04T23:51:25.324954316Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:51:25.332437 kubelet[2675]: I0904 23:51:25.331844 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18" Sep 4 23:51:25.333210 containerd[1500]: time="2025-09-04T23:51:25.333173697Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:51:25.335166 containerd[1500]: time="2025-09-04T23:51:25.334632054Z" level=info msg="Ensure that sandbox 8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5 in task-service has been cleanup successfully" Sep 4 23:51:25.341614 containerd[1500]: time="2025-09-04T23:51:25.341429837Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:51:25.341614 containerd[1500]: time="2025-09-04T23:51:25.341480633Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:51:25.345181 containerd[1500]: time="2025-09-04T23:51:25.342242714Z" level=info msg="Ensure that sandbox d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18 in task-service has been cleanup successfully" Sep 4 23:51:25.345539 systemd[1]: run-netns-cni\x2d0b350bf6\x2d5cf1\x2d98a0\x2d596f\x2dde22e29d9443.mount: Deactivated successfully. 
Sep 4 23:51:25.351577 containerd[1500]: time="2025-09-04T23:51:25.351526977Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:51:25.351690 containerd[1500]: time="2025-09-04T23:51:25.351669367Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:51:25.357412 containerd[1500]: time="2025-09-04T23:51:25.357360328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:1,}" Sep 4 23:51:25.358634 containerd[1500]: time="2025-09-04T23:51:25.358072073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:1,}" Sep 4 23:51:25.359586 kubelet[2675]: I0904 23:51:25.359064 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d" Sep 4 23:51:25.360437 containerd[1500]: time="2025-09-04T23:51:25.360407548Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:51:25.377772 containerd[1500]: time="2025-09-04T23:51:25.361532455Z" level=info msg="Ensure that sandbox 58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d in task-service has been cleanup successfully" Sep 4 23:51:25.377772 containerd[1500]: time="2025-09-04T23:51:25.372994077Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:51:25.377772 containerd[1500]: time="2025-09-04T23:51:25.373284585Z" level=info msg="Ensure that sandbox fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81 in task-service has been cleanup successfully" Sep 4 23:51:25.377987 kubelet[2675]: I0904 23:51:25.367819 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81" Sep 4 23:51:25.377987 kubelet[2675]: I0904 23:51:25.374483 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562" Sep 4 23:51:25.363616 systemd[1]: run-netns-cni\x2d1283da2b\x2d1b89\x2dcc0f\x2d94a5\x2dc5771771b74d.mount: Deactivated successfully. Sep 4 23:51:25.375901 systemd[1]: run-netns-cni\x2d1ad0e558\x2dc35c\x2d80d9\x2dbf20\x2da7fa8eb59570.mount: Deactivated successfully. 
Sep 4 23:51:25.381107 containerd[1500]: time="2025-09-04T23:51:25.380696610Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:51:25.381107 containerd[1500]: time="2025-09-04T23:51:25.380749420Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:51:25.384231 kubelet[2675]: E0904 23:51:25.383841 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:25.392582 containerd[1500]: time="2025-09-04T23:51:25.384502485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:1,}" Sep 4 23:51:25.392582 containerd[1500]: time="2025-09-04T23:51:25.388769703Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:51:25.392582 containerd[1500]: time="2025-09-04T23:51:25.388832242Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:51:25.392582 containerd[1500]: time="2025-09-04T23:51:25.389923004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:1,}" Sep 4 23:51:25.392898 kubelet[2675]: E0904 23:51:25.389364 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:25.394006 containerd[1500]: time="2025-09-04T23:51:25.393964606Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:51:25.394551 containerd[1500]: time="2025-09-04T23:51:25.394406030Z" level=info msg="Ensure that sandbox 48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562 in task-service has been cleanup successfully" Sep 4 23:51:25.397616 containerd[1500]: time="2025-09-04T23:51:25.397450627Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:51:25.397616 containerd[1500]: time="2025-09-04T23:51:25.397492666Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:51:25.397770 kubelet[2675]: I0904 23:51:25.397559 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961" Sep 4 23:51:25.402532 containerd[1500]: time="2025-09-04T23:51:25.400420923Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:51:25.402532 containerd[1500]: time="2025-09-04T23:51:25.400657730Z" level=info msg="Ensure that sandbox 827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961 in task-service has been cleanup successfully" Sep 4 23:51:25.402532 containerd[1500]: time="2025-09-04T23:51:25.402057998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:1,}" Sep 4 23:51:25.408139 containerd[1500]: time="2025-09-04T23:51:25.407926183Z" level=info msg="TearDown 
network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:51:25.408139 containerd[1500]: time="2025-09-04T23:51:25.407977500Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:51:25.412954 containerd[1500]: time="2025-09-04T23:51:25.412545287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:1,}" Sep 4 23:51:25.556493 containerd[1500]: time="2025-09-04T23:51:25.554585064Z" level=error msg="Failed to destroy network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:25.556493 containerd[1500]: time="2025-09-04T23:51:25.555165142Z" level=error msg="encountered an error cleaning up failed sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:25.556493 containerd[1500]: time="2025-09-04T23:51:25.555243159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:25.560374 kubelet[2675]: E0904 23:51:25.557700 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:25.560374 kubelet[2675]: E0904 23:51:25.557768 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:25.560374 kubelet[2675]: E0904 23:51:25.557920 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:25.560500 kubelet[2675]: E0904 23:51:25.557991 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:25.928251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328-shm.mount: Deactivated successfully. Sep 4 23:51:25.928424 systemd[1]: run-netns-cni\x2dafa232fd\x2dbab3\x2d1c52\x2d2542\x2d7063bc0a25c3.mount: Deactivated successfully. Sep 4 23:51:25.928537 systemd[1]: run-netns-cni\x2d49133a86\x2dd775\x2d7867\x2db717\x2d1d9976c4ffe4.mount: Deactivated successfully. Sep 4 23:51:25.928644 systemd[1]: run-netns-cni\x2d4941b217\x2d7b77\x2d1cfc\x2dfac0\x2d8fe7ba252d5d.mount: Deactivated successfully. Sep 4 23:51:26.199252 containerd[1500]: time="2025-09-04T23:51:26.198386400Z" level=error msg="Failed to destroy network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.214300 containerd[1500]: time="2025-09-04T23:51:26.213056513Z" level=error msg="encountered an error cleaning up failed sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.214300 containerd[1500]: time="2025-09-04T23:51:26.213162302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.215080 kubelet[2675]: E0904 23:51:26.213429 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.215080 kubelet[2675]: E0904 23:51:26.213513 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" 
Sep 4 23:51:26.215080 kubelet[2675]: E0904 23:51:26.213540 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:26.215231 kubelet[2675]: E0904 23:51:26.213596 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podUID="a94fd484-c0f8-40c8-aa5a-0e730f68f3a7" Sep 4 23:51:26.224841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe-shm.mount: Deactivated successfully. Sep 4 23:51:26.414222 kubelet[2675]: I0904 23:51:26.414147 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328" Sep 4 23:51:26.420151 containerd[1500]: time="2025-09-04T23:51:26.418958448Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:51:26.420151 containerd[1500]: time="2025-09-04T23:51:26.420021607Z" level=info msg="Ensure that sandbox 5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328 in task-service has been cleanup successfully" Sep 4 23:51:26.423623 kubelet[2675]: I0904 23:51:26.423019 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe" Sep 4 23:51:26.429254 containerd[1500]: time="2025-09-04T23:51:26.429188958Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:51:26.429254 containerd[1500]: time="2025-09-04T23:51:26.429233963Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:51:26.429782 containerd[1500]: time="2025-09-04T23:51:26.429632336Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:51:26.432177 containerd[1500]: time="2025-09-04T23:51:26.429983199Z" level=info msg="Ensure that sandbox 964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe in task-service has been cleanup successfully" Sep 4 23:51:26.432833 containerd[1500]: time="2025-09-04T23:51:26.432534082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:1,}" Sep 4 23:51:26.433626 containerd[1500]: time="2025-09-04T23:51:26.433478987Z" level=info msg="TearDown network for sandbox 
\"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:51:26.433626 containerd[1500]: time="2025-09-04T23:51:26.433511790Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:51:26.442773 containerd[1500]: time="2025-09-04T23:51:26.441668259Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:51:26.442773 containerd[1500]: time="2025-09-04T23:51:26.441818242Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:51:26.442773 containerd[1500]: time="2025-09-04T23:51:26.441879157Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:51:26.444542 containerd[1500]: time="2025-09-04T23:51:26.444514039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:2,}" Sep 4 23:51:26.668850 containerd[1500]: time="2025-09-04T23:51:26.668772856Z" level=error msg="Failed to destroy network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.675321 containerd[1500]: time="2025-09-04T23:51:26.675247456Z" level=error msg="encountered an error cleaning up failed sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.675929 containerd[1500]: time="2025-09-04T23:51:26.675777548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.676498 kubelet[2675]: E0904 23:51:26.676284 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.676966 kubelet[2675]: E0904 23:51:26.676839 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:26.676966 kubelet[2675]: E0904 23:51:26.676909 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:26.677856 kubelet[2675]: E0904 23:51:26.677136 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ph5zt" podUID="98a345f7-7f13-4446-8508-3763699201e1" Sep 4 23:51:26.718631 containerd[1500]: time="2025-09-04T23:51:26.718332897Z" level=error msg="Failed to destroy network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.724391 containerd[1500]: time="2025-09-04T23:51:26.724101282Z" level=error msg="encountered an error cleaning up failed sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.724391 containerd[1500]: time="2025-09-04T23:51:26.724218163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.725402 kubelet[2675]: E0904 23:51:26.725119 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.725402 kubelet[2675]: E0904 23:51:26.725301 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:26.725402 kubelet[2675]: E0904 
23:51:26.725344 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:26.726424 kubelet[2675]: E0904 23:51:26.725752 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" podUID="518a1d0b-97e3-468a-b085-a8ed59e3b9de" Sep 4 23:51:26.733968 containerd[1500]: time="2025-09-04T23:51:26.733899815Z" level=error msg="Failed to destroy network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.735430 containerd[1500]: time="2025-09-04T23:51:26.735371146Z" level=error msg="encountered an error cleaning up failed sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.735727 containerd[1500]: time="2025-09-04T23:51:26.735649383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.736267 kubelet[2675]: E0904 23:51:26.736213 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.736433 kubelet[2675]: E0904 23:51:26.736402 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:26.736647 kubelet[2675]: E0904 23:51:26.736579 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:26.736838 kubelet[2675]: E0904 23:51:26.736804 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lr5vb" podUID="eae99b8b-7303-450f-a486-e66279670f06" Sep 4 23:51:26.763219 containerd[1500]: time="2025-09-04T23:51:26.763074418Z" level=error msg="Failed to destroy network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.767883 containerd[1500]: time="2025-09-04T23:51:26.767792618Z" level=error msg="encountered an error cleaning up failed sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.768294 containerd[1500]: time="2025-09-04T23:51:26.768260702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.769938 kubelet[2675]: E0904 23:51:26.768798 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.769938 kubelet[2675]: E0904 23:51:26.768878 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:26.769938 kubelet[2675]: E0904 23:51:26.768906 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:26.770116 kubelet[2675]: E0904 23:51:26.768959 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d55db56d-ts945" podUID="210f206c-d2e9-4f37-8e9c-bb39c462e3a5" Sep 4 23:51:26.790198 containerd[1500]: time="2025-09-04T23:51:26.790132830Z" level=error msg="Failed to destroy network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.790882 containerd[1500]: time="2025-09-04T23:51:26.790850366Z" level=error msg="encountered an error cleaning up failed sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.795026 containerd[1500]: time="2025-09-04T23:51:26.791034705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.795445 containerd[1500]: time="2025-09-04T23:51:26.795388977Z" level=error msg="Failed to destroy network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.795624 kubelet[2675]: E0904 23:51:26.795552 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.795887 kubelet[2675]: E0904 23:51:26.795650 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:26.795887 kubelet[2675]: E0904 23:51:26.795676 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:26.795887 kubelet[2675]: E0904 23:51:26.795740 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-w4bmm" podUID="4dda5f32-9863-42ac-8d05-03239bb6d11e" Sep 4 23:51:26.796929 containerd[1500]: time="2025-09-04T23:51:26.796720734Z" level=error msg="encountered an error cleaning up failed sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.796929 containerd[1500]: time="2025-09-04T23:51:26.796795024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.797401 kubelet[2675]: E0904 23:51:26.797187 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.797401 kubelet[2675]: E0904 23:51:26.797237 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:26.797401 kubelet[2675]: E0904 23:51:26.797262 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:26.797615 kubelet[2675]: E0904 23:51:26.797304 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" podUID="a8eddde7-2ee6-48c8-8135-d8f649b5e715" Sep 4 23:51:26.853143 containerd[1500]: time="2025-09-04T23:51:26.853050292Z" level=error msg="Failed to destroy network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.854178 containerd[1500]: time="2025-09-04T23:51:26.854122669Z" level=error msg="Failed to destroy network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.854664 containerd[1500]: time="2025-09-04T23:51:26.854623646Z" level=error msg="encountered an error cleaning up failed sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.854760 containerd[1500]: time="2025-09-04T23:51:26.854709588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.854872 containerd[1500]: time="2025-09-04T23:51:26.854826299Z" level=error msg="encountered an 
error cleaning up failed sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.854915 containerd[1500]: time="2025-09-04T23:51:26.854877516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.855159 kubelet[2675]: E0904 23:51:26.855029 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:26.855673 kubelet[2675]: E0904 23:51:26.855448 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:26.855673 kubelet[2675]: E0904 23:51:26.855506 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:26.855673 kubelet[2675]: E0904 23:51:26.855581 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podUID="a94fd484-c0f8-40c8-aa5a-0e730f68f3a7" Sep 4 23:51:26.858674 kubelet[2675]: E0904 23:51:26.858634 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 4 23:51:26.859064 kubelet[2675]: E0904 23:51:26.858844 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:26.859064 kubelet[2675]: E0904 23:51:26.858907 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:26.863327 kubelet[2675]: E0904 23:51:26.859014 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:26.934605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c-shm.mount: Deactivated successfully. Sep 4 23:51:26.934756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501-shm.mount: Deactivated successfully. Sep 4 23:51:26.934874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517-shm.mount: Deactivated successfully. Sep 4 23:51:26.934973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555-shm.mount: Deactivated successfully. Sep 4 23:51:26.935090 systemd[1]: run-netns-cni\x2dfe701b58\x2d2816\x2dd34f\x2dfb32\x2dc55bcf831ccb.mount: Deactivated successfully. Sep 4 23:51:26.935183 systemd[1]: run-netns-cni\x2dc61e4975\x2d627d\x2d8f15\x2d8232\x2debc93b3ecc62.mount: Deactivated successfully. 
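The failures above all share one root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico bind-mounted from the host. Until that file exists, every CNI add and delete fails the same way, so both sandbox creation and sandbox cleanup error out. A minimal Go sketch of that precondition check, reconstructed from the error text rather than taken from Calico's source:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the path the Calico CNI plugin reads to learn which
// Calico node object this host corresponds to. It is written by the
// calico/node container, which bind-mounts /var/lib/calico from the host.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// This is the condition behind the log lines above: until
		// calico/node is running and has populated the mount, every
		// CNI add/delete fails with "no such file or directory".
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", name)
}
```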
Sep 4 23:51:27.434222 kubelet[2675]: I0904 23:51:27.433805 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c" Sep 4 23:51:27.439808 containerd[1500]: time="2025-09-04T23:51:27.434541035Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:51:27.439808 containerd[1500]: time="2025-09-04T23:51:27.434783813Z" level=info msg="Ensure that sandbox 5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c in task-service has been cleanup successfully" Sep 4 23:51:27.439808 containerd[1500]: time="2025-09-04T23:51:27.439201282Z" level=info msg="TearDown network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" successfully" Sep 4 23:51:27.439808 containerd[1500]: time="2025-09-04T23:51:27.439235958Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" returns successfully" Sep 4 23:51:27.440746 containerd[1500]: time="2025-09-04T23:51:27.439861871Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:51:27.445537 containerd[1500]: time="2025-09-04T23:51:27.441487314Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:51:27.445537 containerd[1500]: time="2025-09-04T23:51:27.441515777Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:51:27.445537 containerd[1500]: time="2025-09-04T23:51:27.444285663Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:51:27.445537 containerd[1500]: time="2025-09-04T23:51:27.444556976Z" level=info msg="Ensure that sandbox 8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07 in task-service has been cleanup successfully" Sep 4 23:51:27.445537 containerd[1500]: time="2025-09-04T23:51:27.444868955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:2,}" Sep 4 23:51:27.446195 kubelet[2675]: I0904 23:51:27.443604 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07" Sep 4 23:51:27.446843 systemd[1]: run-netns-cni\x2db7015dd8\x2dc073\x2df3b5\x2dd788\x2d4e612367dab5.mount: Deactivated successfully. 
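The run-netns-cni\x2d… and …-shm.mount units that systemd deactivates in the preceding entries are transient mount units for each sandbox's network namespace (under /run/netns) and its /dev/shm tmpfs. systemd encodes '/' as '-' in unit names and escapes a literal '-' as \x2d, so each unit name maps back to a mount path. A small decoder, shown as an illustration (the escaping rules are systemd's; the helper function is hypothetical):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath converts a systemd mount-unit name such as
//   run-netns-cni\x2dfe701b58\x2d2816\x2dd34f\x2dfb32\x2dc55bcf831ccb.mount
// back into the mount point it guards. systemd turns '/' into '-' and
// escapes a literal '-' (and other special bytes) as \xHH, so the example
// decodes to /run/netns/cni-fe701b58-2816-d34f-fb32-c55bcf831ccb.
func unitToPath(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape in %q: %w", unit, err)
			}
			b.WriteByte(byte(v))
			i += 3 // skip the \xHH sequence (loop adds the final +1)
		case name[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(name[i])
		}
	}
	return b.String(), nil
}

func main() {
	p, err := unitToPath(`run-netns-cni\x2dfe701b58\x2d2816\x2dd34f\x2dfb32\x2dc55bcf831ccb.mount`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /run/netns/cni-fe701b58-2816-d34f-fb32-c55bcf831ccb
}
```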
Sep 4 23:51:27.448486 containerd[1500]: time="2025-09-04T23:51:27.448431529Z" level=info msg="TearDown network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" successfully" Sep 4 23:51:27.448486 containerd[1500]: time="2025-09-04T23:51:27.448483648Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" returns successfully" Sep 4 23:51:27.452373 containerd[1500]: time="2025-09-04T23:51:27.452316632Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:51:27.453439 containerd[1500]: time="2025-09-04T23:51:27.452443612Z" level=info msg="TearDown network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:51:27.453439 containerd[1500]: time="2025-09-04T23:51:27.452459433Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:51:27.456106 containerd[1500]: time="2025-09-04T23:51:27.455897301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:2,}" Sep 4 23:51:27.456611 kubelet[2675]: I0904 23:51:27.456578 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed" Sep 4 23:51:27.457133 containerd[1500]: time="2025-09-04T23:51:27.457109552Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:51:27.457875 containerd[1500]: time="2025-09-04T23:51:27.457706921Z" level=info msg="Ensure that sandbox 85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed in task-service has been cleanup successfully" Sep 4 23:51:27.457895 systemd[1]: run-netns-cni\x2dcbad0e01\x2ddd54\x2d1232\x2d7559\x2d3428b58d8ddd.mount: Deactivated successfully. 
Sep 4 23:51:27.463684 kubelet[2675]: I0904 23:51:27.460441 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555" Sep 4 23:51:27.465515 containerd[1500]: time="2025-09-04T23:51:27.465468383Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:51:27.465611 containerd[1500]: time="2025-09-04T23:51:27.465499210Z" level=info msg="TearDown network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" successfully" Sep 4 23:51:27.465846 containerd[1500]: time="2025-09-04T23:51:27.465817692Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" returns successfully" Sep 4 23:51:27.466920 containerd[1500]: time="2025-09-04T23:51:27.465917150Z" level=info msg="Ensure that sandbox 7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555 in task-service has been cleanup successfully" Sep 4 23:51:27.469858 containerd[1500]: time="2025-09-04T23:51:27.469788448Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.469957366Z" level=info msg="TearDown network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.469989698Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.470776515Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.470873578Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.470886263Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.474684201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:3,}" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.479789872Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.482549398Z" level=info msg="TearDown network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" successfully" Sep 4 23:51:27.482846 containerd[1500]: time="2025-09-04T23:51:27.482580897Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" returns successfully" Sep 4 23:51:27.485204 kubelet[2675]: I0904 23:51:27.478927 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517" Sep 4 23:51:27.472052 systemd[1]: run-netns-cni\x2d12cc01a3\x2de4ed\x2d43b6\x2d3818\x2df63108dbe94a.mount: Deactivated successfully. 
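Each time cleanup finishes, kubelet re-queues the pod and containerd logs a fresh RunPodSandbox with the Attempt counter incremented (1, 2, 3 as the section progresses). The sketch below only mirrors that teardown-then-retry shape as it appears in the log; it is not kubelet or containerd code, and the pod name and error string are copied from the entries above purely for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoNodename reproduces the root error from the log; the functions
// below are stand-ins, not kubelet or containerd internals.
var errNoNodename = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runPodSandbox pretends to create a sandbox: it hands back an ID (in the
// log a 64-hex-char ID such as a79a8807…) but fails network setup.
func runPodSandbox(pod string, attempt int) (string, error) {
	id := fmt.Sprintf("sandbox-%s-attempt-%d", pod, attempt) // placeholder ID
	return id, fmt.Errorf("failed to setup network for sandbox %q: %w", id, errNoNodename)
}

// stopPodSandbox corresponds to the StopPodSandbox / "TearDown network"
// lines and to systemd deactivating the sandbox's shm and netns mounts.
func stopPodSandbox(id string) {
	fmt.Println("tear down network for sandbox", id)
}

func main() {
	pod := "csi-node-driver-t8wcm"
	for attempt := 1; attempt <= 3; attempt++ {
		id, err := runPodSandbox(pod, attempt)
		if err != nil {
			fmt.Printf("error syncing pod, skipping (attempt %d): %v\n", attempt, err)
			stopPodSandbox(id) // clean up before the next attempt
			continue
		}
		fmt.Println("sandbox ready:", id)
		return
	}
}
```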
Sep 4 23:51:27.487731 kubelet[2675]: I0904 23:51:27.486786 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc" Sep 4 23:51:27.493111 containerd[1500]: time="2025-09-04T23:51:27.485058170Z" level=info msg="Ensure that sandbox 2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517 in task-service has been cleanup successfully" Sep 4 23:51:27.493205 kubelet[2675]: I0904 23:51:27.491782 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b" Sep 4 23:51:27.493558 systemd[1]: run-netns-cni\x2dfa5162de\x2d2c4d\x2d0989\x2d044a\x2d789ee586eb7a.mount: Deactivated successfully. Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.486114036Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.494613871Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.494629641Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.494715994Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.489457535Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:51:27.495067 containerd[1500]: time="2025-09-04T23:51:27.494924629Z" level=info msg="Ensure that sandbox a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc in task-service has been cleanup successfully" Sep 4 23:51:27.503715 containerd[1500]: time="2025-09-04T23:51:27.499004279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:2,}" Sep 4 23:51:27.505360 containerd[1500]: time="2025-09-04T23:51:27.502742626Z" level=info msg="TearDown network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" successfully" Sep 4 23:51:27.505360 containerd[1500]: time="2025-09-04T23:51:27.504594756Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" returns successfully" Sep 4 23:51:27.505360 containerd[1500]: time="2025-09-04T23:51:27.504169774Z" level=info msg="Ensure that sandbox 4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b in task-service has been cleanup successfully" Sep 4 23:51:27.506254 containerd[1500]: time="2025-09-04T23:51:27.505737467Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:51:27.506254 containerd[1500]: time="2025-09-04T23:51:27.505843988Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:51:27.506254 containerd[1500]: time="2025-09-04T23:51:27.505857113Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:51:27.506680 containerd[1500]: time="2025-09-04T23:51:27.506592854Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:2,}" Sep 4 23:51:27.509656 kubelet[2675]: I0904 23:51:27.509335 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501" Sep 4 23:51:27.509806 containerd[1500]: time="2025-09-04T23:51:27.509773816Z" level=info msg="TearDown network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" successfully" Sep 4 23:51:27.509980 containerd[1500]: time="2025-09-04T23:51:27.509869257Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" returns successfully" Sep 4 23:51:27.509980 containerd[1500]: time="2025-09-04T23:51:27.510775850Z" level=info msg="TearDown network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" successfully" Sep 4 23:51:27.509980 containerd[1500]: time="2025-09-04T23:51:27.510794906Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" returns successfully" Sep 4 23:51:27.511115 containerd[1500]: time="2025-09-04T23:51:27.511085996Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:51:27.511333 containerd[1500]: time="2025-09-04T23:51:27.511313477Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:51:27.511398 containerd[1500]: time="2025-09-04T23:51:27.511383269Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:51:27.513384 containerd[1500]: time="2025-09-04T23:51:27.512525227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:2,}" Sep 4 23:51:27.513384 containerd[1500]: time="2025-09-04T23:51:27.512754691Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:51:27.513790 containerd[1500]: time="2025-09-04T23:51:27.513768317Z" level=info msg="Ensure that sandbox a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501 in task-service has been cleanup successfully" Sep 4 23:51:27.520020 containerd[1500]: time="2025-09-04T23:51:27.519834272Z" level=info msg="TearDown network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" successfully" Sep 4 23:51:27.520020 containerd[1500]: time="2025-09-04T23:51:27.519942277Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" returns successfully" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.522361859Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.522530278Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.522551237Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.522969288Z" level=info msg="StopPodSandbox for 
\"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.523411234Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:51:27.525949 containerd[1500]: time="2025-09-04T23:51:27.523430850Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:51:27.527344 kubelet[2675]: E0904 23:51:27.523936 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:27.527344 kubelet[2675]: E0904 23:51:27.525366 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:27.528450 containerd[1500]: time="2025-09-04T23:51:27.528420602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:2,}" Sep 4 23:51:27.529012 containerd[1500]: time="2025-09-04T23:51:27.528592526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:2,}" Sep 4 23:51:27.917820 systemd[1]: run-netns-cni\x2dc76af2da\x2da4d7\x2d06cf\x2d2970\x2d1dd6b460f961.mount: Deactivated successfully. Sep 4 23:51:27.917969 systemd[1]: run-netns-cni\x2dfc468590\x2ddf79\x2d445c\x2dd8d2\x2d5219e0ae69b9.mount: Deactivated successfully. Sep 4 23:51:27.918102 systemd[1]: run-netns-cni\x2d5912a088\x2dc7cf\x2d292e\x2d9a14\x2dc41621646b95.mount: Deactivated successfully. Sep 4 23:51:27.918226 systemd[1]: run-netns-cni\x2d50b9043c\x2ddfd9\x2d349b\x2d8491\x2d95f4aeb79c72.mount: Deactivated successfully. Sep 4 23:51:35.947684 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:54114.service - OpenSSH per-connection server daemon (10.0.0.1:54114). Sep 4 23:51:37.092583 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 54114 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:51:37.102752 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:37.121814 kubelet[2675]: E0904 23:51:37.120851 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:37.137826 systemd-logind[1485]: New session 8 of user core. Sep 4 23:51:37.142248 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:51:37.508237 sshd[4080]: Connection closed by 10.0.0.1 port 54114 Sep 4 23:51:37.509364 sshd-session[4078]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:37.516571 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:54114.service: Deactivated successfully. Sep 4 23:51:37.522154 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:51:37.524467 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:51:37.526754 systemd-logind[1485]: Removed session 8. 
Sep 4 23:51:37.640806 containerd[1500]: time="2025-09-04T23:51:37.640729528Z" level=error msg="Failed to destroy network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.644163 containerd[1500]: time="2025-09-04T23:51:37.642215954Z" level=error msg="encountered an error cleaning up failed sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.644163 containerd[1500]: time="2025-09-04T23:51:37.642300634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.644430 kubelet[2675]: E0904 23:51:37.642605 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.644430 kubelet[2675]: E0904 23:51:37.642689 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:37.644430 kubelet[2675]: E0904 23:51:37.642719 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:37.644566 kubelet[2675]: E0904 23:51:37.642792 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-54d579b49d-w4bmm" podUID="4dda5f32-9863-42ac-8d05-03239bb6d11e" Sep 4 23:51:37.722762 containerd[1500]: time="2025-09-04T23:51:37.722647996Z" level=error msg="Failed to destroy network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.723584 containerd[1500]: time="2025-09-04T23:51:37.723553255Z" level=error msg="encountered an error cleaning up failed sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.724126 containerd[1500]: time="2025-09-04T23:51:37.724084538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.724692 kubelet[2675]: E0904 23:51:37.724643 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.724922 kubelet[2675]: E0904 23:51:37.724897 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:37.725153 kubelet[2675]: E0904 23:51:37.725125 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:37.725297 kubelet[2675]: E0904 23:51:37.725264 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d55db56d-ts945" podUID="210f206c-d2e9-4f37-8e9c-bb39c462e3a5" Sep 4 23:51:37.729984 containerd[1500]: time="2025-09-04T23:51:37.729909143Z" level=error msg="Failed to destroy network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.731899 containerd[1500]: time="2025-09-04T23:51:37.731855688Z" level=error msg="encountered an error cleaning up failed sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.732614 containerd[1500]: time="2025-09-04T23:51:37.732582761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.733758 kubelet[2675]: E0904 23:51:37.733466 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.733758 kubelet[2675]: E0904 23:51:37.733571 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:37.733758 kubelet[2675]: E0904 23:51:37.733610 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:37.733966 kubelet[2675]: E0904 23:51:37.733681 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ph5zt" podUID="98a345f7-7f13-4446-8508-3763699201e1" Sep 4 23:51:37.743702 containerd[1500]: time="2025-09-04T23:51:37.743629055Z" level=error msg="Failed to destroy network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.744510 containerd[1500]: time="2025-09-04T23:51:37.744483357Z" level=error msg="encountered an error cleaning up failed sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.745359 containerd[1500]: time="2025-09-04T23:51:37.745325947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.746580 kubelet[2675]: E0904 23:51:37.746512 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.746684 kubelet[2675]: E0904 23:51:37.746616 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:37.746719 kubelet[2675]: E0904 23:51:37.746692 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:37.747013 kubelet[2675]: E0904 23:51:37.746791 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podUID="a94fd484-c0f8-40c8-aa5a-0e730f68f3a7" Sep 4 23:51:37.755299 containerd[1500]: time="2025-09-04T23:51:37.755231568Z" level=error msg="Failed to destroy network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.755791 containerd[1500]: time="2025-09-04T23:51:37.755762489Z" level=error msg="encountered an error cleaning up failed sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.755863 containerd[1500]: time="2025-09-04T23:51:37.755839956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.756192 kubelet[2675]: E0904 23:51:37.756142 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.756301 kubelet[2675]: E0904 23:51:37.756238 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:37.756301 kubelet[2675]: E0904 23:51:37.756268 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:37.756377 kubelet[2675]: E0904 23:51:37.756328 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:37.760045 containerd[1500]: time="2025-09-04T23:51:37.758852263Z" level=error msg="Failed to destroy network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.760045 containerd[1500]: time="2025-09-04T23:51:37.759902516Z" level=error msg="encountered an error cleaning up failed sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.760154 containerd[1500]: time="2025-09-04T23:51:37.760071755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.760884 kubelet[2675]: E0904 23:51:37.760591 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.760884 kubelet[2675]: E0904 23:51:37.760755 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:37.760884 kubelet[2675]: E0904 23:51:37.760846 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:37.761346 kubelet[2675]: E0904 23:51:37.761019 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" podUID="a8eddde7-2ee6-48c8-8135-d8f649b5e715" Sep 4 23:51:37.769153 containerd[1500]: time="2025-09-04T23:51:37.769094799Z" level=error msg="Failed to destroy network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.770013 containerd[1500]: time="2025-09-04T23:51:37.769831900Z" level=error msg="encountered an error cleaning up failed sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.770013 containerd[1500]: time="2025-09-04T23:51:37.769898737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.770410 kubelet[2675]: E0904 23:51:37.770341 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.770489 kubelet[2675]: E0904 23:51:37.770426 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:37.770489 kubelet[2675]: E0904 23:51:37.770454 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:37.770569 kubelet[2675]: E0904 23:51:37.770511 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" podUID="518a1d0b-97e3-468a-b085-a8ed59e3b9de" Sep 4 23:51:37.779217 containerd[1500]: time="2025-09-04T23:51:37.779159439Z" level=error msg="Failed to destroy network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.779648 containerd[1500]: time="2025-09-04T23:51:37.779604169Z" level=error msg="encountered an error cleaning up failed sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.779751 containerd[1500]: time="2025-09-04T23:51:37.779687406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.779998 kubelet[2675]: E0904 23:51:37.779926 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:37.780134 kubelet[2675]: E0904 23:51:37.780014 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:37.780134 kubelet[2675]: E0904 23:51:37.780049 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:37.780134 kubelet[2675]: E0904 23:51:37.780099 2675 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lr5vb" podUID="eae99b8b-7303-450f-a486-e66279670f06" Sep 4 23:51:38.032595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468-shm.mount: Deactivated successfully. Sep 4 23:51:38.032739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07-shm.mount: Deactivated successfully. Sep 4 23:51:38.032848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c-shm.mount: Deactivated successfully. Sep 4 23:51:38.545175 kubelet[2675]: I0904 23:51:38.545131 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56" Sep 4 23:51:38.546715 containerd[1500]: time="2025-09-04T23:51:38.545924790Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" Sep 4 23:51:38.546715 containerd[1500]: time="2025-09-04T23:51:38.546520855Z" level=info msg="Ensure that sandbox 057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56 in task-service has been cleanup successfully" Sep 4 23:51:38.547579 containerd[1500]: time="2025-09-04T23:51:38.547542454Z" level=info msg="TearDown network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" successfully" Sep 4 23:51:38.547579 containerd[1500]: time="2025-09-04T23:51:38.547574273Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" returns successfully" Sep 4 23:51:38.548697 kubelet[2675]: I0904 23:51:38.548255 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468" Sep 4 23:51:38.548776 containerd[1500]: time="2025-09-04T23:51:38.548467910Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:51:38.549163 containerd[1500]: time="2025-09-04T23:51:38.548902280Z" level=info msg="TearDown network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" successfully" Sep 4 23:51:38.549163 containerd[1500]: time="2025-09-04T23:51:38.548926175Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" returns successfully" Sep 4 23:51:38.549457 containerd[1500]: time="2025-09-04T23:51:38.549275786Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" Sep 4 23:51:38.549591 containerd[1500]: time="2025-09-04T23:51:38.549556946Z" level=info msg="Ensure that sandbox 213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468 in task-service has been cleanup successfully" Sep 4 
23:51:38.549752 containerd[1500]: time="2025-09-04T23:51:38.549718691Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:51:38.549992 containerd[1500]: time="2025-09-04T23:51:38.549956650Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:51:38.550319 containerd[1500]: time="2025-09-04T23:51:38.550275392Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:51:38.550454 containerd[1500]: time="2025-09-04T23:51:38.549793392Z" level=info msg="TearDown network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" successfully" Sep 4 23:51:38.550791 kubelet[2675]: E0904 23:51:38.550768 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:38.551163 containerd[1500]: time="2025-09-04T23:51:38.550869434Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" returns successfully" Sep 4 23:51:38.550930 systemd[1]: run-netns-cni\x2db13ae006\x2d188c\x2dca90\x2d8e05\x2d0436ea74388d.mount: Deactivated successfully. Sep 4 23:51:38.552534 containerd[1500]: time="2025-09-04T23:51:38.551936638Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:51:38.552534 containerd[1500]: time="2025-09-04T23:51:38.552081862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:3,}" Sep 4 23:51:38.552534 containerd[1500]: time="2025-09-04T23:51:38.552129923Z" level=info msg="TearDown network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" successfully" Sep 4 23:51:38.552534 containerd[1500]: time="2025-09-04T23:51:38.552146464Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" returns successfully" Sep 4 23:51:38.553710 containerd[1500]: time="2025-09-04T23:51:38.553677444Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:51:38.553823 containerd[1500]: time="2025-09-04T23:51:38.553790247Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:51:38.553823 containerd[1500]: time="2025-09-04T23:51:38.553816486Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:51:38.554845 kubelet[2675]: E0904 23:51:38.554672 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:38.554845 kubelet[2675]: I0904 23:51:38.554713 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c" Sep 4 23:51:38.555306 containerd[1500]: time="2025-09-04T23:51:38.555278857Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" Sep 4 23:51:38.556350 containerd[1500]: time="2025-09-04T23:51:38.555690123Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:3,}" Sep 4 23:51:38.556350 containerd[1500]: time="2025-09-04T23:51:38.556153578Z" level=info msg="Ensure that sandbox f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c in task-service has been cleanup successfully" Sep 4 23:51:38.556395 systemd[1]: run-netns-cni\x2d665d9332\x2dd933\x2d8887\x2d912a\x2d0ed6022dc41b.mount: Deactivated successfully. Sep 4 23:51:38.556625 containerd[1500]: time="2025-09-04T23:51:38.556583029Z" level=info msg="TearDown network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" successfully" Sep 4 23:51:38.556625 containerd[1500]: time="2025-09-04T23:51:38.556608637Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" returns successfully" Sep 4 23:51:38.557140 containerd[1500]: time="2025-09-04T23:51:38.557107088Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:51:38.557257 containerd[1500]: time="2025-09-04T23:51:38.557232564Z" level=info msg="TearDown network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" successfully" Sep 4 23:51:38.557257 containerd[1500]: time="2025-09-04T23:51:38.557254186Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" returns successfully" Sep 4 23:51:38.558886 containerd[1500]: time="2025-09-04T23:51:38.558780837Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:51:38.559362 containerd[1500]: time="2025-09-04T23:51:38.559308793Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:51:38.559683 containerd[1500]: time="2025-09-04T23:51:38.559582501Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:51:38.560023 kubelet[2675]: I0904 23:51:38.559974 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1" Sep 4 23:51:38.560582 containerd[1500]: time="2025-09-04T23:51:38.560547913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:3,}" Sep 4 23:51:38.560915 containerd[1500]: time="2025-09-04T23:51:38.560885470Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" Sep 4 23:51:38.561131 systemd[1]: run-netns-cni\x2d08a35211\x2ddd6d\x2d0cdf\x2d4ee3\x2d003ac77d34fb.mount: Deactivated successfully. 
Sep 4 23:51:38.562574 containerd[1500]: time="2025-09-04T23:51:38.561822799Z" level=info msg="Ensure that sandbox 941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1 in task-service has been cleanup successfully" Sep 4 23:51:38.564467 kubelet[2675]: I0904 23:51:38.564426 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07" Sep 4 23:51:38.564892 containerd[1500]: time="2025-09-04T23:51:38.564860373Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" Sep 4 23:51:38.565150 containerd[1500]: time="2025-09-04T23:51:38.565123029Z" level=info msg="Ensure that sandbox b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07 in task-service has been cleanup successfully" Sep 4 23:51:38.566394 containerd[1500]: time="2025-09-04T23:51:38.566348883Z" level=info msg="TearDown network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" successfully" Sep 4 23:51:38.566465 containerd[1500]: time="2025-09-04T23:51:38.566437951Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" returns successfully" Sep 4 23:51:38.568564 systemd[1]: run-netns-cni\x2de59f67f3\x2db3c0\x2d5a07\x2d1a47\x2dd8f734c10073.mount: Deactivated successfully. Sep 4 23:51:38.569983 containerd[1500]: time="2025-09-04T23:51:38.569953777Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:51:38.570478 containerd[1500]: time="2025-09-04T23:51:38.570427812Z" level=info msg="TearDown network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" successfully" Sep 4 23:51:38.570478 containerd[1500]: time="2025-09-04T23:51:38.570463881Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" returns successfully" Sep 4 23:51:38.591437 containerd[1500]: time="2025-09-04T23:51:38.591225841Z" level=info msg="TearDown network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" successfully" Sep 4 23:51:38.591437 containerd[1500]: time="2025-09-04T23:51:38.591276327Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" returns successfully" Sep 4 23:51:39.029535 systemd[1]: run-netns-cni\x2dccd99120\x2d875a\x2d66c0\x2dfbc9\x2d946129168a74.mount: Deactivated successfully. 
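The run-netns-cni\x2d… mount units that systemd reports as deactivated above are escaped unit names: systemd turns "/" into "-" and a literal "-" into "\x2d", so run-netns-cni\x2db13ae006….mount is the bind mount for the CNI network namespace at /run/netns/cni-b13ae006-…. Below is a minimal decoding sketch (plain Python, no systemd dependency; it ignores the rarer escape cases) for anyone who finds the path form easier to grep for:

```python
import re

def unescape_mount_unit(unit: str) -> str:
    # Simplified inverse of `systemd-escape --path`: strip the ".mount"
    # suffix, turn '-' back into '/', then decode \xHH byte escapes
    # (e.g. \x2d -> '-'). Corner cases of the full spec are ignored.
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")
    name = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

# Example taken from the journal entries above:
print(unescape_mount_unit(
    r"run-netns-cni\x2db13ae006\x2d188c\x2dca90\x2d8e05\x2d0436ea74388d.mount"
))
# -> /run/netns/cni-b13ae006-188c-ca90-8e05-0436ea74388d
```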
Sep 4 23:51:39.478294 containerd[1500]: time="2025-09-04T23:51:39.477848324Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:51:39.479656 containerd[1500]: time="2025-09-04T23:51:39.479340821Z" level=info msg="TearDown network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" successfully" Sep 4 23:51:39.479656 containerd[1500]: time="2025-09-04T23:51:39.479428556Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" returns successfully" Sep 4 23:51:39.479656 containerd[1500]: time="2025-09-04T23:51:39.479450178Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:51:39.479656 containerd[1500]: time="2025-09-04T23:51:39.479613075Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:51:39.479656 containerd[1500]: time="2025-09-04T23:51:39.479667567Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:51:39.480574 kubelet[2675]: I0904 23:51:39.480524 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0" Sep 4 23:51:39.480916 containerd[1500]: time="2025-09-04T23:51:39.480841934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:3,}" Sep 4 23:51:39.481237 containerd[1500]: time="2025-09-04T23:51:39.481180493Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:51:39.481311 containerd[1500]: time="2025-09-04T23:51:39.481291313Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" Sep 4 23:51:39.481348 containerd[1500]: time="2025-09-04T23:51:39.481307222Z" level=info msg="TearDown network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:51:39.481348 containerd[1500]: time="2025-09-04T23:51:39.481322391Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:51:39.481804 containerd[1500]: time="2025-09-04T23:51:39.481478946Z" level=info msg="Ensure that sandbox e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0 in task-service has been cleanup successfully" Sep 4 23:51:39.481804 containerd[1500]: time="2025-09-04T23:51:39.481727666Z" level=info msg="TearDown network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" successfully" Sep 4 23:51:39.481804 containerd[1500]: time="2025-09-04T23:51:39.481789763Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" returns successfully" Sep 4 23:51:39.482031 containerd[1500]: time="2025-09-04T23:51:39.482001133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:3,}" Sep 4 23:51:39.482335 containerd[1500]: time="2025-09-04T23:51:39.482302581Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:51:39.482435 containerd[1500]: 
time="2025-09-04T23:51:39.482408982Z" level=info msg="TearDown network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" successfully" Sep 4 23:51:39.482435 containerd[1500]: time="2025-09-04T23:51:39.482421556Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" returns successfully" Sep 4 23:51:39.482696 containerd[1500]: time="2025-09-04T23:51:39.482672760Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:51:39.482778 containerd[1500]: time="2025-09-04T23:51:39.482762319Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:51:39.482778 containerd[1500]: time="2025-09-04T23:51:39.482775343Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:51:39.483566 containerd[1500]: time="2025-09-04T23:51:39.483539055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:3,}" Sep 4 23:51:39.483919 kubelet[2675]: I0904 23:51:39.483894 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1" Sep 4 23:51:39.484720 containerd[1500]: time="2025-09-04T23:51:39.484550544Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" Sep 4 23:51:39.484814 containerd[1500]: time="2025-09-04T23:51:39.484781510Z" level=info msg="Ensure that sandbox 766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1 in task-service has been cleanup successfully" Sep 4 23:51:39.484893 systemd[1]: run-netns-cni\x2d32881807\x2d270f\x2d3273\x2d3192\x2d86571d4002b6.mount: Deactivated successfully. Sep 4 23:51:39.485097 containerd[1500]: time="2025-09-04T23:51:39.485077819Z" level=info msg="TearDown network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" successfully" Sep 4 23:51:39.485097 containerd[1500]: time="2025-09-04T23:51:39.485094561Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" returns successfully" Sep 4 23:51:39.488183 systemd[1]: run-netns-cni\x2d97c6f8e3\x2d1a3c\x2d283a\x2d8f14\x2d0976451ae5a0.mount: Deactivated successfully. 
Sep 4 23:51:39.488789 containerd[1500]: time="2025-09-04T23:51:39.488604616Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:51:39.488789 containerd[1500]: time="2025-09-04T23:51:39.488713621Z" level=info msg="TearDown network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" successfully" Sep 4 23:51:39.488789 containerd[1500]: time="2025-09-04T23:51:39.488756773Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" returns successfully" Sep 4 23:51:39.489516 containerd[1500]: time="2025-09-04T23:51:39.489493163Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:51:39.489590 containerd[1500]: time="2025-09-04T23:51:39.489574316Z" level=info msg="TearDown network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:51:39.489590 containerd[1500]: time="2025-09-04T23:51:39.489586960Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490070042Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490190850Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490201911Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490664463Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490695392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:4,}" Sep 4 23:51:39.490954 containerd[1500]: time="2025-09-04T23:51:39.490841408Z" level=info msg="Ensure that sandbox adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f in task-service has been cleanup successfully" Sep 4 23:51:39.491148 kubelet[2675]: I0904 23:51:39.490154 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f" Sep 4 23:51:39.491177 containerd[1500]: time="2025-09-04T23:51:39.491114132Z" level=info msg="TearDown network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" successfully" Sep 4 23:51:39.491177 containerd[1500]: time="2025-09-04T23:51:39.491128800Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" returns successfully" Sep 4 23:51:39.492916 containerd[1500]: time="2025-09-04T23:51:39.492298147Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:51:39.492916 containerd[1500]: time="2025-09-04T23:51:39.492402123Z" level=info msg="TearDown network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" successfully" Sep 4 23:51:39.492916 containerd[1500]: time="2025-09-04T23:51:39.492416069Z" level=info 
msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" returns successfully" Sep 4 23:51:39.493360 containerd[1500]: time="2025-09-04T23:51:39.493309455Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:51:39.493411 containerd[1500]: time="2025-09-04T23:51:39.493395297Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:51:39.493411 containerd[1500]: time="2025-09-04T23:51:39.493405037Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:51:39.494021 systemd[1]: run-netns-cni\x2d022ce52a\x2dc221\x2d33e9\x2d483b\x2d665c3cd96217.mount: Deactivated successfully. Sep 4 23:51:39.495158 containerd[1500]: time="2025-09-04T23:51:39.494789008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:3,}" Sep 4 23:51:42.120168 kubelet[2675]: E0904 23:51:42.120091 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:42.530324 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:45352.service - OpenSSH per-connection server daemon (10.0.0.1:45352). Sep 4 23:51:46.119801 kubelet[2675]: E0904 23:51:46.119746 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:46.120304 kubelet[2675]: E0904 23:51:46.119769 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:47.798587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966790063.mount: Deactivated successfully. Sep 4 23:51:48.391765 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 45352 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:51:48.332007 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:48.338827 systemd-logind[1485]: New session 9 of user core. Sep 4 23:51:48.350362 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:51:48.760246 sshd[4353]: Connection closed by 10.0.0.1 port 45352 Sep 4 23:51:48.760599 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:48.765221 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:45352.service: Deactivated successfully. Sep 4 23:51:48.767646 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:51:48.768473 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:51:48.769786 systemd-logind[1485]: Removed session 9. 
Sep 4 23:51:53.083404 containerd[1500]: time="2025-09-04T23:51:53.081843559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:53.215297 containerd[1500]: time="2025-09-04T23:51:53.215212624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 4 23:51:53.288429 containerd[1500]: time="2025-09-04T23:51:53.288340835Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:53.338870 containerd[1500]: time="2025-09-04T23:51:53.338633944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:51:53.341136 containerd[1500]: time="2025-09-04T23:51:53.340693346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 30.01988223s" Sep 4 23:51:53.341136 containerd[1500]: time="2025-09-04T23:51:53.340763599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 4 23:51:53.364101 containerd[1500]: time="2025-09-04T23:51:53.363746990Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 23:51:53.479804 containerd[1500]: time="2025-09-04T23:51:53.479724646Z" level=info msg="CreateContainer within sandbox \"acc1c54dc5b19887225796e92abd1fbf55ebfb7ba1de23c6dace20f3dd56c5af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cd81e737701f299f127ed044ddf5b84b36b6498c1933f9f2c67fd715e232af5b\"" Sep 4 23:51:53.488810 containerd[1500]: time="2025-09-04T23:51:53.488340961Z" level=info msg="StartContainer for \"cd81e737701f299f127ed044ddf5b84b36b6498c1933f9f2c67fd715e232af5b\"" Sep 4 23:51:53.500270 containerd[1500]: time="2025-09-04T23:51:53.500196763Z" level=error msg="Failed to destroy network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.501118 containerd[1500]: time="2025-09-04T23:51:53.501083995Z" level=error msg="encountered an error cleaning up failed sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.501295 containerd[1500]: time="2025-09-04T23:51:53.501264686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.501800 kubelet[2675]: E0904 23:51:53.501731 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.502444 kubelet[2675]: E0904 23:51:53.501833 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:53.502444 kubelet[2675]: E0904 23:51:53.501858 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lr5vb" Sep 4 23:51:53.502444 kubelet[2675]: E0904 23:51:53.501919 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lr5vb_kube-system(eae99b8b-7303-450f-a486-e66279670f06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lr5vb" podUID="eae99b8b-7303-450f-a486-e66279670f06" Sep 4 23:51:53.525852 containerd[1500]: time="2025-09-04T23:51:53.525225781Z" level=error msg="Failed to destroy network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.526360 containerd[1500]: time="2025-09-04T23:51:53.526313722Z" level=error msg="encountered an error cleaning up failed sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.526456 containerd[1500]: time="2025-09-04T23:51:53.526417338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:3,} failed, error" 
error="failed to setup network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.528811 kubelet[2675]: E0904 23:51:53.526752 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.528811 kubelet[2675]: E0904 23:51:53.526856 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:53.528811 kubelet[2675]: E0904 23:51:53.526889 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-w4bmm" Sep 4 23:51:53.529017 kubelet[2675]: E0904 23:51:53.526949 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-w4bmm_calico-system(4dda5f32-9863-42ac-8d05-03239bb6d11e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-w4bmm" podUID="4dda5f32-9863-42ac-8d05-03239bb6d11e" Sep 4 23:51:53.541872 containerd[1500]: time="2025-09-04T23:51:53.537839552Z" level=error msg="Failed to destroy network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.543268 containerd[1500]: time="2025-09-04T23:51:53.542516069Z" level=error msg="encountered an error cleaning up failed sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.543268 containerd[1500]: time="2025-09-04T23:51:53.542636095Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.543403 kubelet[2675]: E0904 23:51:53.542931 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.543403 kubelet[2675]: E0904 23:51:53.543016 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:53.543403 kubelet[2675]: E0904 23:51:53.543058 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ph5zt" Sep 4 23:51:53.543549 kubelet[2675]: E0904 23:51:53.543132 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ph5zt_kube-system(98a345f7-7f13-4446-8508-3763699201e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ph5zt" podUID="98a345f7-7f13-4446-8508-3763699201e1" Sep 4 23:51:53.567330 containerd[1500]: time="2025-09-04T23:51:53.561721157Z" level=error msg="Failed to destroy network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.573313 containerd[1500]: time="2025-09-04T23:51:53.571051267Z" level=error msg="Failed to destroy network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.575798 containerd[1500]: time="2025-09-04T23:51:53.574062374Z" level=error msg="encountered an error cleaning up 
failed sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.575798 containerd[1500]: time="2025-09-04T23:51:53.574153746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.575993 kubelet[2675]: E0904 23:51:53.575485 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.575993 kubelet[2675]: E0904 23:51:53.575578 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:53.575993 kubelet[2675]: E0904 23:51:53.575605 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" Sep 4 23:51:53.576155 kubelet[2675]: E0904 23:51:53.575661 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-647fc84596-qpbf5_calico-system(a8eddde7-2ee6-48c8-8135-d8f649b5e715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" podUID="a8eddde7-2ee6-48c8-8135-d8f649b5e715" Sep 4 23:51:53.587899 containerd[1500]: time="2025-09-04T23:51:53.587824340Z" level=error msg="encountered an error cleaning up failed sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.588437 containerd[1500]: time="2025-09-04T23:51:53.587947983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.590129 kubelet[2675]: E0904 23:51:53.589470 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.590129 kubelet[2675]: E0904 23:51:53.589559 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:53.590129 kubelet[2675]: E0904 23:51:53.589586 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d55db56d-ts945" Sep 4 23:51:53.590305 kubelet[2675]: E0904 23:51:53.589637 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d55db56d-ts945_calico-system(210f206c-d2e9-4f37-8e9c-bb39c462e3a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d55db56d-ts945" podUID="210f206c-d2e9-4f37-8e9c-bb39c462e3a5" Sep 4 23:51:53.598444 containerd[1500]: time="2025-09-04T23:51:53.598361736Z" level=error msg="Failed to destroy network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.602767 containerd[1500]: time="2025-09-04T23:51:53.601047119Z" level=error msg="Failed to destroy network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.602767 containerd[1500]: time="2025-09-04T23:51:53.601694760Z" level=error msg="encountered an error cleaning up failed sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.602767 containerd[1500]: time="2025-09-04T23:51:53.601787144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.602767 containerd[1500]: time="2025-09-04T23:51:53.602181758Z" level=error msg="encountered an error cleaning up failed sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.602767 containerd[1500]: time="2025-09-04T23:51:53.602231843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.603123 kubelet[2675]: E0904 23:51:53.602536 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.603123 kubelet[2675]: E0904 23:51:53.602620 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:53.603123 kubelet[2675]: E0904 23:51:53.602661 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" Sep 4 23:51:53.603261 kubelet[2675]: E0904 
23:51:53.602725 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-lbsxp_calico-apiserver(a94fd484-c0f8-40c8-aa5a-0e730f68f3a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podUID="a94fd484-c0f8-40c8-aa5a-0e730f68f3a7" Sep 4 23:51:53.603261 kubelet[2675]: E0904 23:51:53.602819 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.603261 kubelet[2675]: E0904 23:51:53.602847 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:53.603407 kubelet[2675]: E0904 23:51:53.602867 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8wcm" Sep 4 23:51:53.603407 kubelet[2675]: E0904 23:51:53.602908 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8wcm_calico-system(22e67ab5-9d3d-4526-b31b-64c19a0aca9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8wcm" podUID="22e67ab5-9d3d-4526-b31b-64c19a0aca9b" Sep 4 23:51:53.606750 containerd[1500]: time="2025-09-04T23:51:53.606567757Z" level=error msg="Failed to destroy network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.609066 containerd[1500]: time="2025-09-04T23:51:53.608903611Z" level=error msg="encountered an error cleaning up failed sandbox 
\"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.609066 containerd[1500]: time="2025-09-04T23:51:53.608986307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.609515 kubelet[2675]: E0904 23:51:53.609340 2675 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 23:51:53.609515 kubelet[2675]: E0904 23:51:53.609398 2675 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:53.609515 kubelet[2675]: E0904 23:51:53.609423 2675 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" Sep 4 23:51:53.609648 kubelet[2675]: E0904 23:51:53.609467 2675 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f5f7b99c-t4zg6_calico-apiserver(518a1d0b-97e3-468a-b085-a8ed59e3b9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" podUID="518a1d0b-97e3-468a-b085-a8ed59e3b9de" Sep 4 23:51:53.740823 systemd[1]: Started cri-containerd-cd81e737701f299f127ed044ddf5b84b36b6498c1933f9f2c67fd715e232af5b.scope - libcontainer container cd81e737701f299f127ed044ddf5b84b36b6498c1933f9f2c67fd715e232af5b. Sep 4 23:51:53.788459 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:43168.service - OpenSSH per-connection server daemon (10.0.0.1:43168). 
Sep 4 23:51:53.873548 containerd[1500]: time="2025-09-04T23:51:53.868791736Z" level=info msg="StartContainer for \"cd81e737701f299f127ed044ddf5b84b36b6498c1933f9f2c67fd715e232af5b\" returns successfully" Sep 4 23:51:54.004462 sshd[4647]: Accepted publickey for core from 10.0.0.1 port 43168 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:51:54.007577 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:54.021293 systemd-logind[1485]: New session 10 of user core. Sep 4 23:51:54.031445 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:51:54.204686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da-shm.mount: Deactivated successfully. Sep 4 23:51:54.210061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb-shm.mount: Deactivated successfully. Sep 4 23:51:54.210269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123-shm.mount: Deactivated successfully. Sep 4 23:51:54.210487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94-shm.mount: Deactivated successfully. Sep 4 23:51:54.225603 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 23:51:54.226798 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 23:51:54.340274 kubelet[2675]: I0904 23:51:54.340173 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94" Sep 4 23:51:54.344812 containerd[1500]: time="2025-09-04T23:51:54.343274061Z" level=info msg="StopPodSandbox for \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\"" Sep 4 23:51:54.344812 containerd[1500]: time="2025-09-04T23:51:54.343862751Z" level=info msg="Ensure that sandbox 1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94 in task-service has been cleanup successfully" Sep 4 23:51:54.344812 containerd[1500]: time="2025-09-04T23:51:54.344567660Z" level=info msg="TearDown network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" successfully" Sep 4 23:51:54.344812 containerd[1500]: time="2025-09-04T23:51:54.344585675Z" level=info msg="StopPodSandbox for \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" returns successfully" Sep 4 23:51:54.346511 containerd[1500]: time="2025-09-04T23:51:54.346058070Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" Sep 4 23:51:54.346511 containerd[1500]: time="2025-09-04T23:51:54.346164431Z" level=info msg="TearDown network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" successfully" Sep 4 23:51:54.346511 containerd[1500]: time="2025-09-04T23:51:54.346177565Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" returns successfully" Sep 4 23:51:54.347321 containerd[1500]: time="2025-09-04T23:51:54.347248424Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:51:54.347508 containerd[1500]: time="2025-09-04T23:51:54.347486573Z" level=info msg="TearDown network for sandbox 
\"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" successfully" Sep 4 23:51:54.347614 containerd[1500]: time="2025-09-04T23:51:54.347573046Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" returns successfully" Sep 4 23:51:54.348199 containerd[1500]: time="2025-09-04T23:51:54.348021341Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:51:54.348199 containerd[1500]: time="2025-09-04T23:51:54.348147799Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:51:54.348199 containerd[1500]: time="2025-09-04T23:51:54.348161565Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:51:54.349127 kubelet[2675]: E0904 23:51:54.348551 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:54.349350 systemd[1]: run-netns-cni\x2d56e2d0e8\x2d07af\x2dd625\x2d62f6\x2df05d868c4b8c.mount: Deactivated successfully. Sep 4 23:51:54.350576 containerd[1500]: time="2025-09-04T23:51:54.349346148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:4,}" Sep 4 23:51:54.352183 kubelet[2675]: I0904 23:51:54.351905 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123" Sep 4 23:51:54.356783 containerd[1500]: time="2025-09-04T23:51:54.356692668Z" level=info msg="StopPodSandbox for \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\"" Sep 4 23:51:54.359827 containerd[1500]: time="2025-09-04T23:51:54.357586193Z" level=info msg="Ensure that sandbox f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123 in task-service has been cleanup successfully" Sep 4 23:51:54.363755 systemd[1]: run-netns-cni\x2dddcbaed7\x2d67a7\x2d2415\x2dfa1e\x2d8f70b7c103c3.mount: Deactivated successfully. 
Sep 4 23:51:54.368769 containerd[1500]: time="2025-09-04T23:51:54.368708841Z" level=info msg="TearDown network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" successfully" Sep 4 23:51:54.368769 containerd[1500]: time="2025-09-04T23:51:54.368762302Z" level=info msg="StopPodSandbox for \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" returns successfully" Sep 4 23:51:54.371311 containerd[1500]: time="2025-09-04T23:51:54.371240995Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" Sep 4 23:51:54.371562 containerd[1500]: time="2025-09-04T23:51:54.371483482Z" level=info msg="TearDown network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" successfully" Sep 4 23:51:54.371562 containerd[1500]: time="2025-09-04T23:51:54.371505514Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" returns successfully" Sep 4 23:51:54.373536 containerd[1500]: time="2025-09-04T23:51:54.372019823Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:51:54.373536 containerd[1500]: time="2025-09-04T23:51:54.372212216Z" level=info msg="TearDown network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" successfully" Sep 4 23:51:54.373536 containerd[1500]: time="2025-09-04T23:51:54.372227244Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" returns successfully" Sep 4 23:51:54.378498 containerd[1500]: time="2025-09-04T23:51:54.378446840Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:51:54.378657 containerd[1500]: time="2025-09-04T23:51:54.378600349Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:51:54.378657 containerd[1500]: time="2025-09-04T23:51:54.378616420Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:51:54.379148 kubelet[2675]: E0904 23:51:54.379116 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:54.380940 kubelet[2675]: I0904 23:51:54.380899 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb" Sep 4 23:51:54.380991 containerd[1500]: time="2025-09-04T23:51:54.380194334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:4,}" Sep 4 23:51:54.381521 containerd[1500]: time="2025-09-04T23:51:54.381490147Z" level=info msg="StopPodSandbox for \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\"" Sep 4 23:51:54.382425 containerd[1500]: time="2025-09-04T23:51:54.382392067Z" level=info msg="Ensure that sandbox 8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb in task-service has been cleanup successfully" Sep 4 23:51:54.383075 containerd[1500]: time="2025-09-04T23:51:54.382651456Z" level=info msg="TearDown network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" successfully" Sep 4 23:51:54.383075 containerd[1500]: 
time="2025-09-04T23:51:54.382673417Z" level=info msg="StopPodSandbox for \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" returns successfully" Sep 4 23:51:54.385687 containerd[1500]: time="2025-09-04T23:51:54.385661160Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" Sep 4 23:51:54.386285 containerd[1500]: time="2025-09-04T23:51:54.386016590Z" level=info msg="TearDown network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" successfully" Sep 4 23:51:54.386285 containerd[1500]: time="2025-09-04T23:51:54.386094557Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" returns successfully" Sep 4 23:51:54.386910 containerd[1500]: time="2025-09-04T23:51:54.386680381Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:51:54.386910 containerd[1500]: time="2025-09-04T23:51:54.386797562Z" level=info msg="TearDown network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" successfully" Sep 4 23:51:54.386910 containerd[1500]: time="2025-09-04T23:51:54.386840784Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" returns successfully" Sep 4 23:51:54.386813 systemd[1]: run-netns-cni\x2d24a4434b\x2d4cbe\x2dc28b\x2d7d35\x2d23ab3f95c5d6.mount: Deactivated successfully. Sep 4 23:51:54.387842 kubelet[2675]: I0904 23:51:54.387372 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf" Sep 4 23:51:54.388088 containerd[1500]: time="2025-09-04T23:51:54.388058920Z" level=info msg="StopPodSandbox for \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\"" Sep 4 23:51:54.388349 containerd[1500]: time="2025-09-04T23:51:54.388317538Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:51:54.388444 containerd[1500]: time="2025-09-04T23:51:54.388424660Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:51:54.388738 containerd[1500]: time="2025-09-04T23:51:54.388441551Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:51:54.389664 containerd[1500]: time="2025-09-04T23:51:54.389337310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:4,}" Sep 4 23:51:54.390129 containerd[1500]: time="2025-09-04T23:51:54.389950597Z" level=info msg="Ensure that sandbox 771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf in task-service has been cleanup successfully" Sep 4 23:51:54.390485 containerd[1500]: time="2025-09-04T23:51:54.390454647Z" level=info msg="TearDown network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" successfully" Sep 4 23:51:54.390637 containerd[1500]: time="2025-09-04T23:51:54.390568431Z" level=info msg="StopPodSandbox for \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" returns successfully" Sep 4 23:51:54.392584 containerd[1500]: time="2025-09-04T23:51:54.392385237Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" Sep 4 23:51:54.392584 
containerd[1500]: time="2025-09-04T23:51:54.392509531Z" level=info msg="TearDown network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" successfully" Sep 4 23:51:54.392584 containerd[1500]: time="2025-09-04T23:51:54.392527244Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" returns successfully" Sep 4 23:51:54.395950 systemd[1]: run-netns-cni\x2dc0a93e75\x2dbd3d\x2df246\x2d5830\x2dd002d7450015.mount: Deactivated successfully. Sep 4 23:51:54.397762 containerd[1500]: time="2025-09-04T23:51:54.396451422Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:51:54.397762 containerd[1500]: time="2025-09-04T23:51:54.396578803Z" level=info msg="TearDown network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" successfully" Sep 4 23:51:54.397762 containerd[1500]: time="2025-09-04T23:51:54.396593480Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" returns successfully" Sep 4 23:51:54.400946 containerd[1500]: time="2025-09-04T23:51:54.400279428Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:51:54.400946 containerd[1500]: time="2025-09-04T23:51:54.400440903Z" level=info msg="TearDown network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:51:54.400946 containerd[1500]: time="2025-09-04T23:51:54.400457344Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:51:54.402007 containerd[1500]: time="2025-09-04T23:51:54.401520178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:4,}" Sep 4 23:51:54.408892 sshd[4669]: Connection closed by 10.0.0.1 port 43168 Sep 4 23:51:54.409490 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Sep 4 23:51:54.413772 kubelet[2675]: I0904 23:51:54.410503 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add" Sep 4 23:51:54.415891 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:43168.service: Deactivated successfully. 
Sep 4 23:51:54.417316 containerd[1500]: time="2025-09-04T23:51:54.416977378Z" level=info msg="StopPodSandbox for \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\"" Sep 4 23:51:54.417316 containerd[1500]: time="2025-09-04T23:51:54.417238811Z" level=info msg="Ensure that sandbox ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add in task-service has been cleanup successfully" Sep 4 23:51:54.419188 containerd[1500]: time="2025-09-04T23:51:54.418281255Z" level=info msg="TearDown network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" successfully" Sep 4 23:51:54.419188 containerd[1500]: time="2025-09-04T23:51:54.418304209Z" level=info msg="StopPodSandbox for \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" returns successfully" Sep 4 23:51:54.419188 containerd[1500]: time="2025-09-04T23:51:54.419112262Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" Sep 4 23:51:54.419307 containerd[1500]: time="2025-09-04T23:51:54.419285850Z" level=info msg="TearDown network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" successfully" Sep 4 23:51:54.419346 containerd[1500]: time="2025-09-04T23:51:54.419304856Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" returns successfully" Sep 4 23:51:54.421559 containerd[1500]: time="2025-09-04T23:51:54.421219896Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:51:54.421559 containerd[1500]: time="2025-09-04T23:51:54.421333370Z" level=info msg="TearDown network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" successfully" Sep 4 23:51:54.421559 containerd[1500]: time="2025-09-04T23:51:54.421346725Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" returns successfully" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.423311289Z" level=info msg="StopPodSandbox for \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\"" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.423794239Z" level=info msg="Ensure that sandbox b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87 in task-service has been cleanup successfully" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.424151172Z" level=info msg="TearDown network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" successfully" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.424169927Z" level=info msg="StopPodSandbox for \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" returns successfully" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.424270257Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.424363613Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:51:54.424440 containerd[1500]: time="2025-09-04T23:51:54.424376236Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:51:54.424746 kubelet[2675]: I0904 23:51:54.424213 2675 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87" Sep 4 23:51:54.423963 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:51:54.427463 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:51:54.431230 containerd[1500]: time="2025-09-04T23:51:54.431188509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:4,}" Sep 4 23:51:54.431971 containerd[1500]: time="2025-09-04T23:51:54.431944664Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" Sep 4 23:51:54.432527 containerd[1500]: time="2025-09-04T23:51:54.432441601Z" level=info msg="TearDown network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" successfully" Sep 4 23:51:54.432527 containerd[1500]: time="2025-09-04T23:51:54.432462511Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" returns successfully" Sep 4 23:51:54.436430 systemd-logind[1485]: Removed session 10. Sep 4 23:51:54.443335 containerd[1500]: time="2025-09-04T23:51:54.442852538Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:51:54.443335 containerd[1500]: time="2025-09-04T23:51:54.443020224Z" level=info msg="TearDown network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" successfully" Sep 4 23:51:54.443335 containerd[1500]: time="2025-09-04T23:51:54.443060239Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" returns successfully" Sep 4 23:51:54.445840 containerd[1500]: time="2025-09-04T23:51:54.445604636Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:51:54.445840 containerd[1500]: time="2025-09-04T23:51:54.445754168Z" level=info msg="TearDown network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:51:54.445840 containerd[1500]: time="2025-09-04T23:51:54.445770289Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:51:54.447734 containerd[1500]: time="2025-09-04T23:51:54.447684437Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:51:54.447875 containerd[1500]: time="2025-09-04T23:51:54.447847394Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:51:54.447875 containerd[1500]: time="2025-09-04T23:51:54.447871760Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:51:54.451354 containerd[1500]: time="2025-09-04T23:51:54.451282690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:5,}" Sep 4 23:51:54.452758 kubelet[2675]: I0904 23:51:54.452682 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4" Sep 4 23:51:54.454227 containerd[1500]: time="2025-09-04T23:51:54.454168732Z" level=info msg="StopPodSandbox for 
\"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\"" Sep 4 23:51:54.454697 containerd[1500]: time="2025-09-04T23:51:54.454489767Z" level=info msg="Ensure that sandbox d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4 in task-service has been cleanup successfully" Sep 4 23:51:54.455623 containerd[1500]: time="2025-09-04T23:51:54.455534927Z" level=info msg="TearDown network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" successfully" Sep 4 23:51:54.455623 containerd[1500]: time="2025-09-04T23:51:54.455556307Z" level=info msg="StopPodSandbox for \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" returns successfully" Sep 4 23:51:54.456695 containerd[1500]: time="2025-09-04T23:51:54.456506669Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" Sep 4 23:51:54.456695 containerd[1500]: time="2025-09-04T23:51:54.456643457Z" level=info msg="TearDown network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" successfully" Sep 4 23:51:54.456695 containerd[1500]: time="2025-09-04T23:51:54.456659827Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" returns successfully" Sep 4 23:51:54.458316 containerd[1500]: time="2025-09-04T23:51:54.457465746Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:51:54.458316 containerd[1500]: time="2025-09-04T23:51:54.457574793Z" level=info msg="TearDown network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" successfully" Sep 4 23:51:54.458316 containerd[1500]: time="2025-09-04T23:51:54.457588258Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" returns successfully" Sep 4 23:51:54.458478 kubelet[2675]: I0904 23:51:54.457831 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da" Sep 4 23:51:54.458533 containerd[1500]: time="2025-09-04T23:51:54.458411640Z" level=info msg="StopPodSandbox for \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\"" Sep 4 23:51:54.458653 containerd[1500]: time="2025-09-04T23:51:54.458631043Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:51:54.458820 containerd[1500]: time="2025-09-04T23:51:54.458800273Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:51:54.458911 containerd[1500]: time="2025-09-04T23:51:54.458892857Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:51:54.459263 containerd[1500]: time="2025-09-04T23:51:54.458988247Z" level=info msg="Ensure that sandbox f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da in task-service has been cleanup successfully" Sep 4 23:51:54.459632 containerd[1500]: time="2025-09-04T23:51:54.459549806Z" level=info msg="TearDown network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" successfully" Sep 4 23:51:54.460146 containerd[1500]: time="2025-09-04T23:51:54.460100594Z" level=info msg="StopPodSandbox for \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" returns successfully" Sep 4 23:51:54.460623 
containerd[1500]: time="2025-09-04T23:51:54.459988723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:4,}" Sep 4 23:51:54.461615 containerd[1500]: time="2025-09-04T23:51:54.460551664Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" Sep 4 23:51:54.461713 containerd[1500]: time="2025-09-04T23:51:54.461688598Z" level=info msg="TearDown network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" successfully" Sep 4 23:51:54.461764 containerd[1500]: time="2025-09-04T23:51:54.461710730Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" returns successfully" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462078303Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462212375Z" level=info msg="TearDown network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" successfully" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462224849Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" returns successfully" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462598263Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462692711Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.462704162Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:51:54.463388 containerd[1500]: time="2025-09-04T23:51:54.463203343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:4,}" Sep 4 23:51:54.621234 kubelet[2675]: I0904 23:51:54.621090 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wsj4t" podStartSLOduration=3.6947614509999998 podStartE2EDuration="56.621063323s" podCreationTimestamp="2025-09-04 23:50:58 +0000 UTC" firstStartedPulling="2025-09-04 23:51:00.422909684 +0000 UTC m=+50.400829994" lastFinishedPulling="2025-09-04 23:51:53.349211566 +0000 UTC m=+103.327131866" observedRunningTime="2025-09-04 23:51:54.441239306 +0000 UTC m=+104.419159636" watchObservedRunningTime="2025-09-04 23:51:54.621063323 +0000 UTC m=+104.598983643" Sep 4 23:51:55.194986 systemd[1]: run-netns-cni\x2da991d81c\x2d2977\x2d79b0\x2d1b96\x2df02707e253ae.mount: Deactivated successfully. Sep 4 23:51:55.195147 systemd[1]: run-netns-cni\x2d7ccc1ef7\x2d40a1\x2d0016\x2deded\x2da3201021cd47.mount: Deactivated successfully. Sep 4 23:51:55.195263 systemd[1]: run-netns-cni\x2dd8e7a877\x2d758a\x2d947d\x2d5d9f\x2d1fd8604fa0a4.mount: Deactivated successfully. Sep 4 23:51:55.195373 systemd[1]: run-netns-cni\x2d2afe5b86\x2db9d8\x2dd07c\x2df893\x2d5c05b9ec4e03.mount: Deactivated successfully. 
Sep 4 23:51:56.511806 systemd-networkd[1431]: cali5e4cebb7de1: Link UP Sep 4 23:51:56.512853 systemd-networkd[1431]: cali5e4cebb7de1: Gained carrier Sep 4 23:51:56.912650 systemd-networkd[1431]: calid313b2eb122: Link UP Sep 4 23:51:56.918247 systemd-networkd[1431]: calid313b2eb122: Gained carrier Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:54.593 [INFO][4703] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:54.730 [INFO][4703] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0 coredns-668d6bf9bc- kube-system eae99b8b-7303-450f-a486-e66279670f06 949 0 2025-09-04 23:50:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lr5vb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e4cebb7de1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:54.730 [INFO][4703] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" HandleID="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Workload="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.976 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" HandleID="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Workload="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lr5vb", "timestamp":"2025-09-04 23:51:55.975916148 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.976 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.981 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.982 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.990 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:55.997 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.001 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.003 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.005 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.006 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.009 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39 Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.073 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.284 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.285 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" host="localhost" Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.285 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
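The ipam entries above trace Calico's block-based assignment for this node: acquire the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.88.128/26, claim one free address (192.168.88.129 here) under a per-pod handle, write the block back, and release the lock. The Go sketch below is a deliberately simplified in-memory model of that flow, not Calico's actual IPAM code; only the block CIDR, the resulting address, and the handle prefix are taken from the log.

```go
// Simplified model of block-based IPAM: one /26 block affine to this host,
// a per-address "used" map, and a coarse mutex standing in for the
// host-wide IPAM lock. Illustration of the logged steps only.
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex            // stands in for "About to acquire host-wide IPAM lock."
	cidr netip.Prefix          // e.g. 192.168.88.128/26
	used map[netip.Addr]string // addr -> handle (k8s-pod-network.<containerID>)
}

func newBlock(cidr string) *block {
	return &block{cidr: netip.MustParsePrefix(cidr), used: map[netip.Addr]string{}}
}

// assign claims the next free address in the block for the given handle,
// mirroring "Attempting to assign 1 addresses from block ... Writing block
// in order to claim IPs ... Successfully claimed IPs" above.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := newBlock("192.168.88.128/26")
	ip, _ := b.assign("k8s-pod-network.1d75bb6fc93b")
	fmt.Println(ip) // 192.168.88.129, matching the address claimed above
}
```

Real Calico IPAM additionally persists blocks and handles in the datastore and reconciles affinities across hosts; the mutex-plus-map here only illustrates why each of the four pod sandboxes above serializes on the lock and receives consecutive addresses (.129, .130, .131, .132).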
Sep 4 23:51:56.937696 containerd[1500]: 2025-09-04 23:51:56.285 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" HandleID="k8s-pod-network.1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Workload="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.942399 containerd[1500]: 2025-09-04 23:51:56.295 [INFO][4703] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eae99b8b-7303-450f-a486-e66279670f06", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lr5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4cebb7de1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:56.942399 containerd[1500]: 2025-09-04 23:51:56.295 [INFO][4703] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.942399 containerd[1500]: 2025-09-04 23:51:56.295 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e4cebb7de1 ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.942399 containerd[1500]: 2025-09-04 23:51:56.513 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:56.942399 
containerd[1500]: 2025-09-04 23:51:56.514 [INFO][4703] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eae99b8b-7303-450f-a486-e66279670f06", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39", Pod:"coredns-668d6bf9bc-lr5vb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4cebb7de1", MAC:"76:09:9a:5c:c0:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:56.942399 containerd[1500]: 2025-09-04 23:51:56.918 [INFO][4703] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39" Namespace="kube-system" Pod="coredns-668d6bf9bc-lr5vb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lr5vb-eth0" Sep 4 23:51:57.084261 containerd[1500]: time="2025-09-04T23:51:57.084102480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:51:57.084261 containerd[1500]: time="2025-09-04T23:51:57.084193110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:51:57.084261 containerd[1500]: time="2025-09-04T23:51:57.084207788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:57.084554 containerd[1500]: time="2025-09-04T23:51:57.084319038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:57.128432 systemd[1]: Started cri-containerd-1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39.scope - libcontainer container 1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39. Sep 4 23:51:57.144780 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.498 [INFO][4776] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.814 [INFO][4776] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--w4bmm-eth0 goldmane-54d579b49d- calico-system 4dda5f32-9863-42ac-8d05-03239bb6d11e 953 0 2025-09-04 23:50:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-w4bmm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid313b2eb122 [] [] }} ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.814 [INFO][4776] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4828] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" HandleID="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Workload="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4828] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" HandleID="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Workload="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-w4bmm", "timestamp":"2025-09-04 23:51:55.975000781 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.286 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.286 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.302 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.316 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.322 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.325 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.330 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.330 [INFO][4828] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.331 [INFO][4828] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.354 [INFO][4828] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.903 [INFO][4828] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.903 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" host="localhost" Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.903 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:51:57.175531 containerd[1500]: 2025-09-04 23:51:56.903 [INFO][4828] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" HandleID="k8s-pod-network.c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Workload="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:56.908 [INFO][4776] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--w4bmm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4dda5f32-9863-42ac-8d05-03239bb6d11e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-w4bmm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid313b2eb122", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:56.908 [INFO][4776] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:56.908 [INFO][4776] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid313b2eb122 ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:56.920 [INFO][4776] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:56.930 [INFO][4776] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--w4bmm-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4dda5f32-9863-42ac-8d05-03239bb6d11e", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a", Pod:"goldmane-54d579b49d-w4bmm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid313b2eb122", MAC:"3a:ce:b3:1c:26:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:57.176440 containerd[1500]: 2025-09-04 23:51:57.170 [INFO][4776] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a" Namespace="calico-system" Pod="goldmane-54d579b49d-w4bmm" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--w4bmm-eth0" Sep 4 23:51:57.206655 containerd[1500]: time="2025-09-04T23:51:57.206008638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lr5vb,Uid:eae99b8b-7303-450f-a486-e66279670f06,Namespace:kube-system,Attempt:4,} returns sandbox id \"1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39\"" Sep 4 23:51:57.208336 kubelet[2675]: E0904 23:51:57.208257 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:51:57.212883 containerd[1500]: time="2025-09-04T23:51:57.212688048Z" level=info msg="CreateContainer within sandbox \"1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:51:57.394245 containerd[1500]: time="2025-09-04T23:51:57.394117211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:51:57.394245 containerd[1500]: time="2025-09-04T23:51:57.394190418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:51:57.394245 containerd[1500]: time="2025-09-04T23:51:57.394206729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:57.394511 containerd[1500]: time="2025-09-04T23:51:57.394323159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:51:57.426317 systemd[1]: Started cri-containerd-c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a.scope - libcontainer container c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a. Sep 4 23:51:57.439889 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:51:57.468463 containerd[1500]: time="2025-09-04T23:51:57.468414913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-w4bmm,Uid:4dda5f32-9863-42ac-8d05-03239bb6d11e,Namespace:calico-system,Attempt:4,} returns sandbox id \"c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a\"" Sep 4 23:51:57.469853 containerd[1500]: time="2025-09-04T23:51:57.469812126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 4 23:51:57.692330 systemd-networkd[1431]: cali5e4cebb7de1: Gained IPv6LL Sep 4 23:51:58.076279 systemd-networkd[1431]: calid313b2eb122: Gained IPv6LL Sep 4 23:51:58.942090 kernel: bpftool[5167]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 23:51:59.164887 systemd-networkd[1431]: cali207191856cd: Link UP Sep 4 23:51:59.166360 systemd-networkd[1431]: cali207191856cd: Gained carrier Sep 4 23:51:59.211358 systemd-networkd[1431]: vxlan.calico: Link UP Sep 4 23:51:59.211368 systemd-networkd[1431]: vxlan.calico: Gained carrier Sep 4 23:51:59.422351 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174). Sep 4 23:51:59.456004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335450028.mount: Deactivated successfully. Sep 4 23:51:59.488185 sshd[5208]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:51:59.490497 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:51:59.499281 systemd-logind[1485]: New session 11 of user core. Sep 4 23:51:59.509267 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:54.959 [INFO][4734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:55.122 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0 coredns-668d6bf9bc- kube-system 98a345f7-7f13-4446-8508-3763699201e1 945 0 2025-09-04 23:50:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ph5zt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali207191856cd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:55.122 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" HandleID="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Workload="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" HandleID="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Workload="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ph5zt", "timestamp":"2025-09-04 23:51:55.975082656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:56.903 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:56.904 [INFO][4754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:56.925 [INFO][4754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:57.179 [INFO][4754] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:57.200 [INFO][4754] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:57.202 [INFO][4754] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:57.210 [INFO][4754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:57.210 [INFO][4754] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:58.525 [INFO][4754] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0 Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:58.895 [INFO][4754] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:59.153 [INFO][4754] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:59.153 [INFO][4754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" host="localhost" Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:59.153 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:51:59.647520 containerd[1500]: 2025-09-04 23:51:59.153 [INFO][4754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" HandleID="k8s-pod-network.f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Workload="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.719043 containerd[1500]: 2025-09-04 23:51:59.162 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a345f7-7f13-4446-8508-3763699201e1", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ph5zt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali207191856cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:59.719043 containerd[1500]: 2025-09-04 23:51:59.162 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.719043 containerd[1500]: 2025-09-04 23:51:59.162 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali207191856cd ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.719043 containerd[1500]: 2025-09-04 23:51:59.164 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:51:59.719043 
containerd[1500]: 2025-09-04 23:51:59.165 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"98a345f7-7f13-4446-8508-3763699201e1", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0", Pod:"coredns-668d6bf9bc-ph5zt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali207191856cd", MAC:"12:f3:b6:8b:3e:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:51:59.719043 containerd[1500]: 2025-09-04 23:51:59.645 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-ph5zt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ph5zt-eth0" Sep 4 23:52:00.067473 sshd[5224]: Connection closed by 10.0.0.1 port 43174 Sep 4 23:52:00.069604 systemd-networkd[1431]: calicb41849cb94: Link UP Sep 4 23:52:00.069836 systemd-networkd[1431]: calicb41849cb94: Gained carrier Sep 4 23:52:00.071242 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:00.076286 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:43174.service: Deactivated successfully. Sep 4 23:52:00.080328 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:52:00.082288 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:52:00.083353 systemd-logind[1485]: Removed session 11. 
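The WorkloadEndpoint dumps above print ports in Go hex notation (Port:0x35, Port:0x23c1). Decoded, the coredns endpoints expose dns and dns-tcp on 53 and metrics on 9153, over cali207191856cd at 192.168.88.131/32 for coredns-668d6bf9bc-ph5zt. A minimal decoding sketch, using only values quoted in the log (the struct here is reduced for readability and is not Calico's real v3.WorkloadEndpoint type):

```go
// Decode the hex port values from the endpoint dumps above.
package main

import "fmt"

type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
	}
	fmt.Println("coredns-668d6bf9bc-ph5zt on cali207191856cd (12:f3:b6:8b:3e:6b), 192.168.88.131/32:")
	for _, p := range ports {
		fmt.Printf("  %-8s %s/%d\n", p.Name, p.Protocol, p.Port)
	}
}
```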
Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.873 [INFO][4807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.934 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0 calico-kube-controllers-647fc84596- calico-system a8eddde7-2ee6-48c8-8135-d8f649b5e715 941 0 2025-09-04 23:51:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:647fc84596 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-647fc84596-qpbf5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb41849cb94 [] [] }} ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.934 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" HandleID="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Workload="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" HandleID="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Workload="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-647fc84596-qpbf5", "timestamp":"2025-09-04 23:51:55.975004709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:55.975 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.154 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.154 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.645 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.651 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.655 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.659 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.662 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.662 [INFO][4834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.663 [INFO][4834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:51:59.719 [INFO][4834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:52:00.057 [INFO][4834] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:52:00.058 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" host="localhost" Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:52:00.058 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
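The [4834] entries above trace one pass of Calico's block-based IPAM: the plugin asks for the host-wide IPAM lock at 23:51:55.975, acquires it at 23:51:59.154, confirms the host's affinity for the 192.168.88.128/26 block, assigns one free address from that block (192.168.88.132), writes the block back to claim the IP, and releases the lock. The Go sketch below is a deliberately simplified illustration of that flow, assuming a single in-process mutex and an in-memory claim bitmap; it is not Calico's actual implementation.

package main

import (
    "fmt"
    "net"
    "sync"
)

// block is a toy stand-in for a Calico IPAM allocation block: a CIDR plus a
// per-ordinal "claimed" flag. Real blocks live in the datastore.
type block struct {
    cidr    *net.IPNet
    claimed []bool
}

func newBlock(cidr string) *block {
    _, ipnet, err := net.ParseCIDR(cidr)
    if err != nil {
        panic(err)
    }
    ones, bits := ipnet.Mask.Size()
    return &block{cidr: ipnet, claimed: make([]bool, 1<<(bits-ones))}
}

// assign mimics "Attempting to assign 1 addresses from block": it claims the
// first free ordinal and returns the corresponding address.
func (b *block) assign() (net.IP, error) {
    base := b.cidr.IP.To4()
    for ord, used := range b.claimed {
        if !used {
            b.claimed[ord] = true
            return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
        }
    }
    return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
    var hostWideLock sync.Mutex // stands in for the "host-wide IPAM lock"
    blk := newBlock("192.168.88.128/26")

    // Assume the first four ordinals (.128-.131) are unavailable by this point,
    // reserved or already handed to earlier pods such as coredns-668d6bf9bc-ph5zt at .131.
    for i := 0; i < 4; i++ {
        blk.claimed[i] = true
    }

    hostWideLock.Lock()     // "About to acquire host-wide IPAM lock."
    ip, err := blk.assign() // assign one address from the affine block
    hostWideLock.Unlock()   // "Released host-wide IPAM lock."
    if err != nil {
        panic(err)
    }
    fmt.Println("assigned", ip) // 192.168.88.132, matching the log
}

The multi-second gap between "About to acquire host-wide IPAM lock" and "Acquired host-wide IPAM lock" recurs for every CNI request in this section ([4834], [5012], [5071], [5077], [5057]); the lock serializes all IPAM work on the node, so concurrent pod creations queue behind one another.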
Sep 4 23:52:00.129063 containerd[1500]: 2025-09-04 23:52:00.058 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" HandleID="k8s-pod-network.ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Workload="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.062 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0", GenerateName:"calico-kube-controllers-647fc84596-", Namespace:"calico-system", SelfLink:"", UID:"a8eddde7-2ee6-48c8-8135-d8f649b5e715", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647fc84596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-647fc84596-qpbf5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb41849cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.062 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.062 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb41849cb94 ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.070 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.071 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0", GenerateName:"calico-kube-controllers-647fc84596-", Namespace:"calico-system", SelfLink:"", UID:"a8eddde7-2ee6-48c8-8135-d8f649b5e715", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 51, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"647fc84596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e", Pod:"calico-kube-controllers-647fc84596-qpbf5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb41849cb94", MAC:"02:3d:1e:dd:ac:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:00.129849 containerd[1500]: 2025-09-04 23:52:00.125 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e" Namespace="calico-system" Pod="calico-kube-controllers-647fc84596-qpbf5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--647fc84596--qpbf5-eth0" Sep 4 23:52:00.182750 containerd[1500]: time="2025-09-04T23:52:00.182667624Z" level=info msg="CreateContainer within sandbox \"1d75bb6fc93bb2bc6c0a171547abdde02aa2355fe6d5fe705b9a1013b5850e39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f\"" Sep 4 23:52:00.186770 containerd[1500]: time="2025-09-04T23:52:00.183343618Z" level=info msg="StartContainer for \"c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f\"" Sep 4 23:52:00.221323 systemd[1]: Started cri-containerd-c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f.scope - libcontainer container c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f. Sep 4 23:52:00.356130 containerd[1500]: time="2025-09-04T23:52:00.351929251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:00.356130 containerd[1500]: time="2025-09-04T23:52:00.351986619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:00.356130 containerd[1500]: time="2025-09-04T23:52:00.352001727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:00.356130 containerd[1500]: time="2025-09-04T23:52:00.352098480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:00.378402 systemd[1]: Started cri-containerd-f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0.scope - libcontainer container f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0. Sep 4 23:52:00.402065 systemd-networkd[1431]: cali6fc014cb968: Link UP Sep 4 23:52:00.403709 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:00.405526 systemd-networkd[1431]: cali6fc014cb968: Gained carrier Sep 4 23:52:00.464278 containerd[1500]: time="2025-09-04T23:52:00.464187420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ph5zt,Uid:98a345f7-7f13-4446-8508-3763699201e1,Namespace:kube-system,Attempt:4,} returns sandbox id \"f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0\"" Sep 4 23:52:00.464477 containerd[1500]: time="2025-09-04T23:52:00.464328347Z" level=info msg="StartContainer for \"c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f\" returns successfully" Sep 4 23:52:00.465517 kubelet[2675]: E0904 23:52:00.465478 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:00.467772 containerd[1500]: time="2025-09-04T23:52:00.467729616Z" level=info msg="CreateContainer within sandbox \"f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:52:00.479704 kubelet[2675]: E0904 23:52:00.479658 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:56.336 [INFO][4926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:56.357 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t8wcm-eth0 csi-node-driver- calico-system 22e67ab5-9d3d-4526-b31b-64c19a0aca9b 797 0 2025-09-04 23:50:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t8wcm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6fc014cb968 [] [] }} ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:56.357 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" 
Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:57.045 [INFO][5012] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" HandleID="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Workload="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:57.045 [INFO][5012] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" HandleID="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Workload="localhost-k8s-csi--node--driver--t8wcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050ef80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t8wcm", "timestamp":"2025-09-04 23:51:57.045162542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:51:57.045 [INFO][5012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.059 [INFO][5012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.059 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.070 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.125 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.134 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.138 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.141 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.141 [INFO][5012] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.143 [INFO][5012] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.338 [INFO][5012] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5012] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" host="localhost" Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:52:00.595023 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5012] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" HandleID="k8s-pod-network.9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Workload="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.393 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8wcm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e67ab5-9d3d-4526-b31b-64c19a0aca9b", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t8wcm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fc014cb968", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.393 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.393 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fc014cb968 ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.409 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.410 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8wcm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"22e67ab5-9d3d-4526-b31b-64c19a0aca9b", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc", Pod:"csi-node-driver-t8wcm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6fc014cb968", MAC:"ae:85:e6:bd:c0:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:00.595749 containerd[1500]: 2025-09-04 23:52:00.588 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc" Namespace="calico-system" Pod="csi-node-driver-t8wcm" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8wcm-eth0" Sep 4 23:52:00.644511 containerd[1500]: time="2025-09-04T23:52:00.643827316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:00.644511 containerd[1500]: time="2025-09-04T23:52:00.643893281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:00.644511 containerd[1500]: time="2025-09-04T23:52:00.643906355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:00.644511 containerd[1500]: time="2025-09-04T23:52:00.644004029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:00.677300 systemd[1]: Started cri-containerd-ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e.scope - libcontainer container ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e. 
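For each container in this section the log shows the same ordering: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, StartContainer runs it, and systemd reports a matching transient cri-containerd-<container-id>.scope unit (the "loading plugin io.containerd.*" lines with runtime=io.containerd.runc.v2 appear to be the runc v2 shim starting for each sandbox). The snippet below only illustrates the naming correspondence between a container id and the scope unit systemd logs; the unit-name pattern is taken from the log itself, not from any containerd API.

package main

import "fmt"

func main() {
    // Container id returned by "CreateContainer within sandbox ..." above.
    id := "c3708cec67554d4a9ed69377989d4edae35e5c4218fbf9ca1c0f65cfdaef890f"

    // systemd logs the same id inside a transient scope unit name,
    // e.g. "Started cri-containerd-<id>.scope" a few entries later.
    fmt.Printf("systemd unit: cri-containerd-%s.scope\n", id)
}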
Sep 4 23:52:00.694820 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:00.731738 containerd[1500]: time="2025-09-04T23:52:00.731658040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-647fc84596-qpbf5,Uid:a8eddde7-2ee6-48c8-8135-d8f649b5e715,Namespace:calico-system,Attempt:4,} returns sandbox id \"ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e\"" Sep 4 23:52:00.828331 systemd-networkd[1431]: cali207191856cd: Gained IPv6LL Sep 4 23:52:00.893360 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Sep 4 23:52:01.024008 systemd-networkd[1431]: calie1222bc24db: Link UP Sep 4 23:52:01.025648 systemd-networkd[1431]: calie1222bc24db: Gained carrier Sep 4 23:52:01.257431 kubelet[2675]: I0904 23:52:01.257337 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lr5vb" podStartSLOduration=99.257310069 podStartE2EDuration="1m39.257310069s" podCreationTimestamp="2025-09-04 23:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:00.917699931 +0000 UTC m=+110.895620231" watchObservedRunningTime="2025-09-04 23:52:01.257310069 +0000 UTC m=+111.235230369" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:56.734 [INFO][4991] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:57.168 [INFO][4991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57d55db56d--ts945-eth0 whisker-57d55db56d- calico-system 210f206c-d2e9-4f37-8e9c-bb39c462e3a5 1147 0 2025-09-04 23:51:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57d55db56d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57d55db56d-ts945 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie1222bc24db [] [] }} ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:57.168 [INFO][4991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:57.234 [INFO][5071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:57.237 [INFO][5071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57d55db56d-ts945", "timestamp":"2025-09-04 23:51:57.233971875 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:51:57.237 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.386 [INFO][5071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.406 [INFO][5071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.418 [INFO][5071] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.584 [INFO][5071] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.591 [INFO][5071] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.662 [INFO][5071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.663 [INFO][5071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.938 [INFO][5071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410 Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:00.976 [INFO][5071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:01.013 [INFO][5071] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:01.014 [INFO][5071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" host="localhost" Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:01.014 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 4 23:52:01.268602 containerd[1500]: 2025-09-04 23:52:01.014 [INFO][5071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.019 [INFO][4991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d55db56d--ts945-eth0", GenerateName:"whisker-57d55db56d-", Namespace:"calico-system", SelfLink:"", UID:"210f206c-d2e9-4f37-8e9c-bb39c462e3a5", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d55db56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57d55db56d-ts945", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1222bc24db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.019 [INFO][4991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.020 [INFO][4991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1222bc24db ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.026 [INFO][4991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.027 [INFO][4991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57d55db56d--ts945-eth0", GenerateName:"whisker-57d55db56d-", Namespace:"calico-system", SelfLink:"", UID:"210f206c-d2e9-4f37-8e9c-bb39c462e3a5", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 51, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57d55db56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410", Pod:"whisker-57d55db56d-ts945", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie1222bc24db", MAC:"92:2d:13:5b:a6:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.269305 containerd[1500]: 2025-09-04 23:52:01.264 [INFO][4991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Namespace="calico-system" Pod="whisker-57d55db56d-ts945" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:01.489471 kubelet[2675]: E0904 23:52:01.489295 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:01.492319 containerd[1500]: time="2025-09-04T23:52:01.492124003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:01.492319 containerd[1500]: time="2025-09-04T23:52:01.492266150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:01.492319 containerd[1500]: time="2025-09-04T23:52:01.492288542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.493345 containerd[1500]: time="2025-09-04T23:52:01.493230898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.528300 systemd[1]: Started cri-containerd-9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc.scope - libcontainer container 9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc. 
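The kubelet dns.go errors in this section ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicate that the resolv.conf kubelet propagates to pods lists more nameservers than the resolver limit of three, so only the first three are applied. The small Go check below reproduces the shape of that validation; it is an illustration under the assumption that the node's /etc/resolv.conf is the file being checked, not kubelet's actual code.

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxNameservers = 3 // the classic resolver limit that kubelet warns about

func main() {
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if len(servers) > maxNameservers {
        // Mirrors the shape of the kubelet error above: extra entries are
        // dropped and only the first three are applied.
        fmt.Printf("Nameserver limits exceeded: applying %v, dropping %v\n",
            servers[:maxNameservers], servers[maxNameservers:])
    } else {
        fmt.Printf("nameservers OK: %v\n", servers)
    }
}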
Sep 4 23:52:01.532587 systemd-networkd[1431]: cali6fc014cb968: Gained IPv6LL Sep 4 23:52:01.545973 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:01.562203 containerd[1500]: time="2025-09-04T23:52:01.562141559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8wcm,Uid:22e67ab5-9d3d-4526-b31b-64c19a0aca9b,Namespace:calico-system,Attempt:4,} returns sandbox id \"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc\"" Sep 4 23:52:01.584237 containerd[1500]: time="2025-09-04T23:52:01.583361338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:01.584237 containerd[1500]: time="2025-09-04T23:52:01.584187415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:01.584237 containerd[1500]: time="2025-09-04T23:52:01.584204697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.584441 containerd[1500]: time="2025-09-04T23:52:01.584308513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.623332 systemd[1]: Started cri-containerd-a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410.scope - libcontainer container a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410. Sep 4 23:52:01.628261 systemd-networkd[1431]: cali0ef7f218ee1: Link UP Sep 4 23:52:01.634668 systemd-networkd[1431]: cali0ef7f218ee1: Gained carrier Sep 4 23:52:01.666377 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:01.700634 containerd[1500]: time="2025-09-04T23:52:01.700520212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d55db56d-ts945,Uid:210f206c-d2e9-4f37-8e9c-bb39c462e3a5,Namespace:calico-system,Attempt:4,} returns sandbox id \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\"" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:56.629 [INFO][4978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:57.172 [INFO][4978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0 calico-apiserver-5f5f7b99c- calico-apiserver 518a1d0b-97e3-468a-b085-a8ed59e3b9de 948 0 2025-09-04 23:50:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5f7b99c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f5f7b99c-t4zg6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ef7f218ee1 [] [] }} ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:57.172 [INFO][4978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:57.251 [INFO][5077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" HandleID="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:57.251 [INFO][5077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" HandleID="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f5f7b99c-t4zg6", "timestamp":"2025-09-04 23:51:57.251124466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:51:57.251 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.014 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.014 [INFO][5077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.255 [INFO][5077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.284 [INFO][5077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.292 [INFO][5077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.296 [INFO][5077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.299 [INFO][5077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.300 [INFO][5077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.303 [INFO][5077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63 Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.456 [INFO][5077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.599 [INFO][5077] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.599 [INFO][5077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" host="localhost" Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.599 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:52:01.757721 containerd[1500]: 2025-09-04 23:52:01.600 [INFO][5077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" HandleID="k8s-pod-network.c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.615 [INFO][4978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0", GenerateName:"calico-apiserver-5f5f7b99c-", Namespace:"calico-apiserver", SelfLink:"", UID:"518a1d0b-97e3-468a-b085-a8ed59e3b9de", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5f7b99c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f5f7b99c-t4zg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ef7f218ee1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.616 [INFO][4978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.616 [INFO][4978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ef7f218ee1 ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.639 [INFO][4978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.643 [INFO][4978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0", GenerateName:"calico-apiserver-5f5f7b99c-", Namespace:"calico-apiserver", SelfLink:"", UID:"518a1d0b-97e3-468a-b085-a8ed59e3b9de", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5f7b99c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63", Pod:"calico-apiserver-5f5f7b99c-t4zg6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ef7f218ee1", MAC:"f2:55:a8:07:fd:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.759449 containerd[1500]: 2025-09-04 23:52:01.750 [INFO][4978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-t4zg6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--t4zg6-eth0" Sep 4 23:52:01.777146 containerd[1500]: time="2025-09-04T23:52:01.777081822Z" level=info msg="CreateContainer within sandbox \"f7dfe4d746146d35f5c2c3f17a9863b6914d7fe8c17499771228b258054979d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f9ebd4b85e50d983aaad90962b4654885e16504a2c0253bae556594da7d6b48\"" Sep 4 23:52:01.779590 containerd[1500]: time="2025-09-04T23:52:01.779541977Z" level=info msg="StartContainer for \"9f9ebd4b85e50d983aaad90962b4654885e16504a2c0253bae556594da7d6b48\"" Sep 4 23:52:01.831892 systemd-networkd[1431]: cali499dcc86165: Link UP Sep 4 23:52:01.834905 systemd-networkd[1431]: cali499dcc86165: Gained carrier Sep 4 23:52:01.842367 systemd[1]: Started cri-containerd-9f9ebd4b85e50d983aaad90962b4654885e16504a2c0253bae556594da7d6b48.scope - libcontainer 
container 9f9ebd4b85e50d983aaad90962b4654885e16504a2c0253bae556594da7d6b48. Sep 4 23:52:01.861069 containerd[1500]: time="2025-09-04T23:52:01.858822461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:01.861069 containerd[1500]: time="2025-09-04T23:52:01.858908282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:01.861069 containerd[1500]: time="2025-09-04T23:52:01.858927920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.862433 containerd[1500]: time="2025-09-04T23:52:01.862309993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:56.549 [INFO][4963] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:56.927 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0 calico-apiserver-5f5f7b99c- calico-apiserver a94fd484-c0f8-40c8-aa5a-0e730f68f3a7 950 0 2025-09-04 23:50:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f5f7b99c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f5f7b99c-lbsxp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali499dcc86165 [] [] }} ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:56.928 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:57.298 [INFO][5057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" HandleID="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:57.298 [INFO][5057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" HandleID="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d6920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f5f7b99c-lbsxp", "timestamp":"2025-09-04 23:51:57.298056638 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:51:57.298 [INFO][5057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.599 [INFO][5057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.600 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.631 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.753 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.765 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.769 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.773 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.773 [INFO][5057] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.775 [INFO][5057] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4 Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.784 [INFO][5057] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.801 [INFO][5057] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.801 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" host="localhost" Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.801 [INFO][5057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
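Every address handed out in this section (192.168.88.131 for coredns-668d6bf9bc-ph5zt, .132 for calico-kube-controllers-647fc84596-qpbf5, .133 for csi-node-driver-t8wcm, .134 for whisker-57d55db56d-ts945, .135 and .136 for the two calico-apiserver-5f5f7b99c pods) comes from the same affine block, 192.168.88.128/26. As a quick worked check of what that block covers, the short Go snippet below derives its size and range from the CIDR.

package main

import (
    "fmt"
    "net"
)

func main() {
    // The affine block that every assignment in this log comes from.
    _, blockNet, err := net.ParseCIDR("192.168.88.128/26")
    if err != nil {
        panic(err)
    }
    ones, bits := blockNet.Mask.Size()
    size := 1 << (bits - ones) // 64 addresses for a /26

    first := blockNet.IP.To4()
    last := make(net.IP, len(first))
    copy(last, first)
    last[3] += byte(size - 1)

    fmt.Printf("block %s: %d addresses, %s - %s\n", blockNet, size, first, last)
    // Output: block 192.168.88.128/26: 64 addresses, 192.168.88.128 - 192.168.88.191
}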
Sep 4 23:52:01.896635 containerd[1500]: 2025-09-04 23:52:01.801 [INFO][5057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" HandleID="k8s-pod-network.a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Workload="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.810 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0", GenerateName:"calico-apiserver-5f5f7b99c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a94fd484-c0f8-40c8-aa5a-0e730f68f3a7", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5f7b99c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f5f7b99c-lbsxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali499dcc86165", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.810 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.810 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali499dcc86165 ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.833 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.843 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0", GenerateName:"calico-apiserver-5f5f7b99c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a94fd484-c0f8-40c8-aa5a-0e730f68f3a7", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.September, 4, 23, 50, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f5f7b99c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4", Pod:"calico-apiserver-5f5f7b99c-lbsxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali499dcc86165", MAC:"fa:fa:22:07:26:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 4 23:52:01.897912 containerd[1500]: 2025-09-04 23:52:01.887 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4" Namespace="calico-apiserver" Pod="calico-apiserver-5f5f7b99c-lbsxp" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f5f7b99c--lbsxp-eth0" Sep 4 23:52:01.931585 systemd[1]: Started cri-containerd-c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63.scope - libcontainer container c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63. Sep 4 23:52:01.955279 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:01.983155 systemd-networkd[1431]: calicb41849cb94: Gained IPv6LL Sep 4 23:52:02.012877 containerd[1500]: time="2025-09-04T23:52:02.011991123Z" level=info msg="StartContainer for \"9f9ebd4b85e50d983aaad90962b4654885e16504a2c0253bae556594da7d6b48\" returns successfully" Sep 4 23:52:02.013355 containerd[1500]: time="2025-09-04T23:52:02.011986545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-t4zg6,Uid:518a1d0b-97e3-468a-b085-a8ed59e3b9de,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63\"" Sep 4 23:52:02.086161 containerd[1500]: time="2025-09-04T23:52:02.085750095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:52:02.086161 containerd[1500]: time="2025-09-04T23:52:02.085833261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:52:02.086161 containerd[1500]: time="2025-09-04T23:52:02.085847387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:02.086161 containerd[1500]: time="2025-09-04T23:52:02.085992421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:52:02.118349 systemd[1]: Started cri-containerd-a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4.scope - libcontainer container a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4. Sep 4 23:52:02.135531 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 23:52:02.185590 containerd[1500]: time="2025-09-04T23:52:02.185536993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f5f7b99c-lbsxp,Uid:a94fd484-c0f8-40c8-aa5a-0e730f68f3a7,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4\"" Sep 4 23:52:02.300262 systemd-networkd[1431]: calie1222bc24db: Gained IPv6LL Sep 4 23:52:02.496624 kubelet[2675]: E0904 23:52:02.496549 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:03.445170 kubelet[2675]: I0904 23:52:03.444966 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ph5zt" podStartSLOduration=101.444941649 podStartE2EDuration="1m41.444941649s" podCreationTimestamp="2025-09-04 23:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:52:03.092272031 +0000 UTC m=+113.070192331" watchObservedRunningTime="2025-09-04 23:52:03.444941649 +0000 UTC m=+113.422861949" Sep 4 23:52:03.514844 kubelet[2675]: E0904 23:52:03.514809 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:03.580319 systemd-networkd[1431]: cali0ef7f218ee1: Gained IPv6LL Sep 4 23:52:03.644248 systemd-networkd[1431]: cali499dcc86165: Gained IPv6LL Sep 4 23:52:04.437369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652338750.mount: Deactivated successfully. Sep 4 23:52:04.517995 kubelet[2675]: E0904 23:52:04.517935 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:05.095536 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:42156.service - OpenSSH per-connection server daemon (10.0.0.1:42156). Sep 4 23:52:05.520117 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 42156 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:05.523104 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:05.528828 systemd-logind[1485]: New session 12 of user core. Sep 4 23:52:05.537351 systemd[1]: Started session-12.scope - Session 12 of User core. 
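The recurring kubelet dns.go error above reports that only three resolvers (1.1.1.1 1.0.0.1 8.8.8.8) were applied. A minimal sketch of that truncation rule, assuming the usual three-nameserver resolv.conf limit that kubelet enforces for pod DNS config; this is an illustration of the rule, not kubelet's actual implementation, and the fourth resolver in the example is hypothetical, only there to trigger the warning path:

# Sketch of the nameserver truncation behind the kubelet warning above.
MAX_NAMESERVERS = 3  # assumed glibc/resolv.conf limit that kubelet applies

def apply_nameserver_limit(nameservers):
    if len(nameservers) <= MAX_NAMESERVERS:
        return nameservers, None
    kept = nameservers[:MAX_NAMESERVERS]
    warning = ("Nameserver limits were exceeded, some nameservers have been "
               f"omitted, the applied nameserver line is: {' '.join(kept)}")
    return kept, warning

kept, warning = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
print(kept)     # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
print(warning)  # same shape as the dns.go message in the journal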
Sep 4 23:52:05.707362 containerd[1500]: time="2025-09-04T23:52:05.707268719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:05.709146 containerd[1500]: time="2025-09-04T23:52:05.709061889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 4 23:52:05.711020 containerd[1500]: time="2025-09-04T23:52:05.710988708Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:05.715552 sshd[5666]: Connection closed by 10.0.0.1 port 42156 Sep 4 23:52:05.716091 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:05.719248 containerd[1500]: time="2025-09-04T23:52:05.719146419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:05.720774 containerd[1500]: time="2025-09-04T23:52:05.720713832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.250858945s" Sep 4 23:52:05.720889 containerd[1500]: time="2025-09-04T23:52:05.720780147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 4 23:52:05.722222 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:52:05.722594 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:42156.service: Deactivated successfully. Sep 4 23:52:05.723768 containerd[1500]: time="2025-09-04T23:52:05.723700680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 4 23:52:05.725729 containerd[1500]: time="2025-09-04T23:52:05.725273463Z" level=info msg="CreateContainer within sandbox \"c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 4 23:52:05.727599 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:52:05.729622 systemd-logind[1485]: Removed session 12. Sep 4 23:52:05.748033 containerd[1500]: time="2025-09-04T23:52:05.747970277Z" level=info msg="CreateContainer within sandbox \"c706784c78dfd024f99f902ccc28b4a13f894bf9e906869b1b6c4fd956755e7a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24\"" Sep 4 23:52:05.748882 containerd[1500]: time="2025-09-04T23:52:05.748725520Z" level=info msg="StartContainer for \"a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24\"" Sep 4 23:52:05.792428 systemd[1]: Started cri-containerd-a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24.scope - libcontainer container a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24. 
Sep 4 23:52:05.844706 containerd[1500]: time="2025-09-04T23:52:05.844554856Z" level=info msg="StartContainer for \"a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24\" returns successfully" Sep 4 23:52:06.641484 kubelet[2675]: I0904 23:52:06.641375 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-w4bmm" podStartSLOduration=60.388286417 podStartE2EDuration="1m8.64135771s" podCreationTimestamp="2025-09-04 23:50:58 +0000 UTC" firstStartedPulling="2025-09-04 23:51:57.469467798 +0000 UTC m=+107.447388098" lastFinishedPulling="2025-09-04 23:52:05.722539091 +0000 UTC m=+115.700459391" observedRunningTime="2025-09-04 23:52:06.640876674 +0000 UTC m=+116.618796974" watchObservedRunningTime="2025-09-04 23:52:06.64135771 +0000 UTC m=+116.619278010" Sep 4 23:52:10.265265 containerd[1500]: time="2025-09-04T23:52:10.265219817Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:52:10.265842 containerd[1500]: time="2025-09-04T23:52:10.265352016Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:52:10.265842 containerd[1500]: time="2025-09-04T23:52:10.265363748Z" level=info msg="StopPodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:52:10.265842 containerd[1500]: time="2025-09-04T23:52:10.265784601Z" level=info msg="RemovePodSandbox for \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:52:10.269618 containerd[1500]: time="2025-09-04T23:52:10.269563910Z" level=info msg="Forcibly stopping sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\"" Sep 4 23:52:10.269779 containerd[1500]: time="2025-09-04T23:52:10.269718020Z" level=info msg="TearDown network for sandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" successfully" Sep 4 23:52:10.729769 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:45084.service - OpenSSH per-connection server daemon (10.0.0.1:45084). Sep 4 23:52:10.796970 sshd[5813]: Accepted publickey for core from 10.0.0.1 port 45084 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:10.799092 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:10.804897 systemd-logind[1485]: New session 13 of user core. Sep 4 23:52:10.818389 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:52:10.979889 sshd[5815]: Connection closed by 10.0.0.1 port 45084 Sep 4 23:52:10.980276 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:10.985029 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:45084.service: Deactivated successfully. Sep 4 23:52:10.987755 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:52:10.988752 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:52:10.989723 systemd-logind[1485]: Removed session 13. Sep 4 23:52:11.010256 containerd[1500]: time="2025-09-04T23:52:11.010164012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
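The pod-startup entry above for goldmane-54d579b49d-w4bmm reports podStartSLOduration=60.388286417 next to podStartE2EDuration="1m8.64135771s", which is consistent with the SLO figure being the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick arithmetic check using the monotonic m=+ offsets copied from that entry:

# Verify: SLO duration ~= E2E duration - image pull window (values from the log).
first_started_pulling_m = 107.447388098  # firstStartedPulling m=+ offset
last_finished_pulling_m = 115.700459391  # lastFinishedPulling m=+ offset
pod_start_e2e = 68.64135771              # podStartE2EDuration "1m8.64135771s"

pull_window = last_finished_pulling_m - first_started_pulling_m
print(f"image pull window: {pull_window:.9f}s")                      # ~8.253071293s
print(f"E2E minus pull window: {pod_start_e2e - pull_window:.9f}s")  # ~60.388286417s, matching podStartSLOduration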
Sep 4 23:52:11.010436 containerd[1500]: time="2025-09-04T23:52:11.010273368Z" level=info msg="RemovePodSandbox \"d552be0957bb5de99900719e8adeef8f3d29e3b805da84f98ebd5011b6c00c18\" returns successfully" Sep 4 23:52:11.011029 containerd[1500]: time="2025-09-04T23:52:11.010976983Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:52:11.011199 containerd[1500]: time="2025-09-04T23:52:11.011138577Z" level=info msg="TearDown network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" successfully" Sep 4 23:52:11.011199 containerd[1500]: time="2025-09-04T23:52:11.011151301Z" level=info msg="StopPodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" returns successfully" Sep 4 23:52:11.011497 containerd[1500]: time="2025-09-04T23:52:11.011468779Z" level=info msg="RemovePodSandbox for \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:52:11.011566 containerd[1500]: time="2025-09-04T23:52:11.011498064Z" level=info msg="Forcibly stopping sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\"" Sep 4 23:52:11.011657 containerd[1500]: time="2025-09-04T23:52:11.011610897Z" level=info msg="TearDown network for sandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" successfully" Sep 4 23:52:11.491642 kubelet[2675]: E0904 23:52:11.491583 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:11.750463 containerd[1500]: time="2025-09-04T23:52:11.750235107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:11.852008 containerd[1500]: time="2025-09-04T23:52:11.851932824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:11.852008 containerd[1500]: time="2025-09-04T23:52:11.852072798Z" level=info msg="RemovePodSandbox \"2aacf1540ac827a9743bbc812453ed33cdcd11ba2ec47b1c142708b1b60af517\" returns successfully" Sep 4 23:52:11.852635 containerd[1500]: time="2025-09-04T23:52:11.852607105Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" Sep 4 23:52:11.886523 containerd[1500]: time="2025-09-04T23:52:11.852739674Z" level=info msg="TearDown network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" successfully" Sep 4 23:52:11.886523 containerd[1500]: time="2025-09-04T23:52:11.852756225Z" level=info msg="StopPodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" returns successfully" Sep 4 23:52:11.886523 containerd[1500]: time="2025-09-04T23:52:11.853070107Z" level=info msg="RemovePodSandbox for \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" Sep 4 23:52:11.886523 containerd[1500]: time="2025-09-04T23:52:11.853094883Z" level=info msg="Forcibly stopping sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\"" Sep 4 23:52:11.886523 containerd[1500]: time="2025-09-04T23:52:11.853194261Z" level=info msg="TearDown network for sandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" successfully" Sep 4 23:52:11.958296 containerd[1500]: time="2025-09-04T23:52:11.958183147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 4 23:52:12.056492 containerd[1500]: time="2025-09-04T23:52:12.056398792Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:12.096769 containerd[1500]: time="2025-09-04T23:52:12.096653426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:12.096769 containerd[1500]: time="2025-09-04T23:52:12.096778262Z" level=info msg="RemovePodSandbox \"b0e84c464a3121fda4ef17c3309457ce25ba74965b8319b05c9ced9003856e07\" returns successfully" Sep 4 23:52:12.097566 containerd[1500]: time="2025-09-04T23:52:12.097513456Z" level=info msg="StopPodSandbox for \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\"" Sep 4 23:52:12.097750 containerd[1500]: time="2025-09-04T23:52:12.097720817Z" level=info msg="TearDown network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" successfully" Sep 4 23:52:12.097750 containerd[1500]: time="2025-09-04T23:52:12.097743109Z" level=info msg="StopPodSandbox for \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" returns successfully" Sep 4 23:52:12.098422 containerd[1500]: time="2025-09-04T23:52:12.098382573Z" level=info msg="RemovePodSandbox for \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\"" Sep 4 23:52:12.098422 containerd[1500]: time="2025-09-04T23:52:12.098418822Z" level=info msg="Forcibly stopping sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\"" Sep 4 23:52:12.098598 containerd[1500]: time="2025-09-04T23:52:12.098530051Z" level=info msg="TearDown network for sandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" successfully" Sep 4 23:52:12.190195 containerd[1500]: time="2025-09-04T23:52:12.190097600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:12.191629 containerd[1500]: time="2025-09-04T23:52:12.191578790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.467824479s" Sep 4 23:52:12.191691 containerd[1500]: time="2025-09-04T23:52:12.191634965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 4 23:52:12.193658 containerd[1500]: time="2025-09-04T23:52:12.193599245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 4 23:52:12.219788 containerd[1500]: time="2025-09-04T23:52:12.219681244Z" level=info msg="CreateContainer within sandbox \"ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 23:52:12.482217 containerd[1500]: time="2025-09-04T23:52:12.481860401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:12.482217 containerd[1500]: time="2025-09-04T23:52:12.482164154Z" level=info msg="RemovePodSandbox \"f4259fe11144e2cf6e26c83ddd1b2ffa26f009e246b372560b60fba823ebd8da\" returns successfully" Sep 4 23:52:12.483553 containerd[1500]: time="2025-09-04T23:52:12.483493187Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:52:12.483784 containerd[1500]: time="2025-09-04T23:52:12.483733599Z" level=info msg="TearDown network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:52:12.483932 containerd[1500]: time="2025-09-04T23:52:12.483781610Z" level=info msg="StopPodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:52:12.484448 containerd[1500]: time="2025-09-04T23:52:12.484383955Z" level=info msg="RemovePodSandbox for \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:52:12.484596 containerd[1500]: time="2025-09-04T23:52:12.484482170Z" level=info msg="Forcibly stopping sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\"" Sep 4 23:52:12.485336 containerd[1500]: time="2025-09-04T23:52:12.484638775Z" level=info msg="TearDown network for sandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" successfully" Sep 4 23:52:12.781090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761007902.mount: Deactivated successfully. Sep 4 23:52:12.862854 containerd[1500]: time="2025-09-04T23:52:12.862776504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:12.863439 containerd[1500]: time="2025-09-04T23:52:12.862898143Z" level=info msg="RemovePodSandbox \"827f53fc873a0e338d9f52f8032cc94b1a3e18ddaf7c490eff8ca3e23e804961\" returns successfully" Sep 4 23:52:12.863565 containerd[1500]: time="2025-09-04T23:52:12.863535624Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:52:12.863668 containerd[1500]: time="2025-09-04T23:52:12.863651592Z" level=info msg="TearDown network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" successfully" Sep 4 23:52:12.863668 containerd[1500]: time="2025-09-04T23:52:12.863665088Z" level=info msg="StopPodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" returns successfully" Sep 4 23:52:12.863961 containerd[1500]: time="2025-09-04T23:52:12.863932892Z" level=info msg="RemovePodSandbox for \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:52:12.864011 containerd[1500]: time="2025-09-04T23:52:12.863969121Z" level=info msg="Forcibly stopping sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\"" Sep 4 23:52:12.864144 containerd[1500]: time="2025-09-04T23:52:12.864090279Z" level=info msg="TearDown network for sandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" successfully" Sep 4 23:52:13.492082 containerd[1500]: time="2025-09-04T23:52:13.491951612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
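The transient mount units above (e.g. var-lib-containerd-tmpmounts-containerd\x2dmount1761007902.mount) show systemd's path-to-unit-name escaping: '/' separators become '-', so a literal '-' inside a path component has to be written as \x2d to keep the mapping reversible. A rough sketch of that rule, intended as a simplified approximation of systemd-escape --path --suffix=mount rather than its exact implementation:

# Simplified approximation of systemd path escaping for mount unit names.
import string

SAFE = set(string.ascii_letters + string.digits + ":_.")  # assumed unescaped set

def escape_path_to_mount_unit(path: str) -> str:
    parts = [p for p in path.split("/") if p]  # drop leading/trailing slashes
    out = []
    for i, part in enumerate(parts):
        if i:
            out.append("-")  # '/' separator -> '-'
        for ch in part:
            out.append(ch if ch in SAFE else "\\x%02x" % ord(ch))
    return "".join(out) + ".mount"

print(escape_path_to_mount_unit("/var/lib/containerd/tmpmounts/containerd-mount1761007902"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1761007902.mount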
Sep 4 23:52:13.492265 containerd[1500]: time="2025-09-04T23:52:13.492127863Z" level=info msg="RemovePodSandbox \"8a5087389c49a2cad02ae3bc97a6642557caa1eb2f3cec9ac09e2ad4b7602c07\" returns successfully" Sep 4 23:52:13.492813 containerd[1500]: time="2025-09-04T23:52:13.492775704Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" Sep 4 23:52:13.492983 containerd[1500]: time="2025-09-04T23:52:13.492945914Z" level=info msg="TearDown network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" successfully" Sep 4 23:52:13.492983 containerd[1500]: time="2025-09-04T23:52:13.492969459Z" level=info msg="StopPodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" returns successfully" Sep 4 23:52:13.493446 containerd[1500]: time="2025-09-04T23:52:13.493386734Z" level=info msg="RemovePodSandbox for \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" Sep 4 23:52:13.493446 containerd[1500]: time="2025-09-04T23:52:13.493440065Z" level=info msg="Forcibly stopping sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\"" Sep 4 23:52:13.493720 containerd[1500]: time="2025-09-04T23:52:13.493579919Z" level=info msg="TearDown network for sandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" successfully" Sep 4 23:52:13.495439 containerd[1500]: time="2025-09-04T23:52:13.495388304Z" level=info msg="CreateContainer within sandbox \"ee8e69ad23f57e1998ce8020bb1efa6fd4af61baf4eae2862d4d560f5990318e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7\"" Sep 4 23:52:13.496757 containerd[1500]: time="2025-09-04T23:52:13.496701689Z" level=info msg="StartContainer for \"f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7\"" Sep 4 23:52:13.535250 systemd[1]: Started cri-containerd-f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7.scope - libcontainer container f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7. Sep 4 23:52:13.545265 containerd[1500]: time="2025-09-04T23:52:13.545194989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:13.545485 containerd[1500]: time="2025-09-04T23:52:13.545302572Z" level=info msg="RemovePodSandbox \"941f66dcdcf2714fb62bc0efa9e9e49a79e4c87fb82d33f12118a8fbb6b0c0f1\" returns successfully" Sep 4 23:52:13.545930 containerd[1500]: time="2025-09-04T23:52:13.545903774Z" level=info msg="StopPodSandbox for \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\"" Sep 4 23:52:13.546083 containerd[1500]: time="2025-09-04T23:52:13.546061952Z" level=info msg="TearDown network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" successfully" Sep 4 23:52:13.546083 containerd[1500]: time="2025-09-04T23:52:13.546080336Z" level=info msg="StopPodSandbox for \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" returns successfully" Sep 4 23:52:13.546423 containerd[1500]: time="2025-09-04T23:52:13.546367327Z" level=info msg="RemovePodSandbox for \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\"" Sep 4 23:52:13.546423 containerd[1500]: time="2025-09-04T23:52:13.546407242Z" level=info msg="Forcibly stopping sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\"" Sep 4 23:52:13.546704 containerd[1500]: time="2025-09-04T23:52:13.546535303Z" level=info msg="TearDown network for sandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" successfully" Sep 4 23:52:13.634597 containerd[1500]: time="2025-09-04T23:52:13.634530357Z" level=info msg="StartContainer for \"f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7\" returns successfully" Sep 4 23:52:13.635271 containerd[1500]: time="2025-09-04T23:52:13.635197294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:13.635271 containerd[1500]: time="2025-09-04T23:52:13.635252718Z" level=info msg="RemovePodSandbox \"771b25277a19ea380978650a3efe8598e52d04264d660aea1ad1d7784dd4b3bf\" returns successfully" Sep 4 23:52:13.636525 containerd[1500]: time="2025-09-04T23:52:13.636400710Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:52:13.636593 containerd[1500]: time="2025-09-04T23:52:13.636556393Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:52:13.636593 containerd[1500]: time="2025-09-04T23:52:13.636567574Z" level=info msg="StopPodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:52:13.637556 containerd[1500]: time="2025-09-04T23:52:13.637477959Z" level=info msg="RemovePodSandbox for \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:52:13.637556 containerd[1500]: time="2025-09-04T23:52:13.637522593Z" level=info msg="Forcibly stopping sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\"" Sep 4 23:52:13.639598 containerd[1500]: time="2025-09-04T23:52:13.637660393Z" level=info msg="TearDown network for sandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" successfully" Sep 4 23:52:13.677733 containerd[1500]: time="2025-09-04T23:52:13.677511514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:13.677733 containerd[1500]: time="2025-09-04T23:52:13.677594862Z" level=info msg="RemovePodSandbox \"3335a05213e5cf53a2af929f30d1221f794ec04b276a0a4c4003280a9e9e78de\" returns successfully" Sep 4 23:52:13.678477 containerd[1500]: time="2025-09-04T23:52:13.678436256Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:52:13.678648 containerd[1500]: time="2025-09-04T23:52:13.678593643Z" level=info msg="TearDown network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:52:13.678648 containerd[1500]: time="2025-09-04T23:52:13.678613741Z" level=info msg="StopPodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:52:13.679125 containerd[1500]: time="2025-09-04T23:52:13.679090619Z" level=info msg="RemovePodSandbox for \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:52:13.679184 containerd[1500]: time="2025-09-04T23:52:13.679130644Z" level=info msg="Forcibly stopping sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\"" Sep 4 23:52:13.679278 containerd[1500]: time="2025-09-04T23:52:13.679228198Z" level=info msg="TearDown network for sandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" successfully" Sep 4 23:52:13.727149 containerd[1500]: time="2025-09-04T23:52:13.727028824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:13.727149 containerd[1500]: time="2025-09-04T23:52:13.727144722Z" level=info msg="RemovePodSandbox \"964694f0d23b56842acd80c302b997ba0db4a59ffaa487d3335fb389185fbffe\" returns successfully" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.727781652Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.727981227Z" level=info msg="TearDown network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" successfully" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.728001976Z" level=info msg="StopPodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" returns successfully" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.728394486Z" level=info msg="RemovePodSandbox for \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.728431135Z" level=info msg="Forcibly stopping sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\"" Sep 4 23:52:13.730880 containerd[1500]: time="2025-09-04T23:52:13.728541693Z" level=info msg="TearDown network for sandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" successfully" Sep 4 23:52:13.776116 containerd[1500]: time="2025-09-04T23:52:13.776018458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:13.776116 containerd[1500]: time="2025-09-04T23:52:13.776126472Z" level=info msg="RemovePodSandbox \"85f3ca1634b3c8fe61144c477d124da9f1d73bec7c85eda388239d34711223ed\" returns successfully" Sep 4 23:52:13.778152 containerd[1500]: time="2025-09-04T23:52:13.778115067Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" Sep 4 23:52:13.778457 containerd[1500]: time="2025-09-04T23:52:13.778352525Z" level=info msg="TearDown network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" successfully" Sep 4 23:52:13.778457 containerd[1500]: time="2025-09-04T23:52:13.778429290Z" level=info msg="StopPodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" returns successfully" Sep 4 23:52:13.779931 containerd[1500]: time="2025-09-04T23:52:13.778799707Z" level=info msg="RemovePodSandbox for \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" Sep 4 23:52:13.779931 containerd[1500]: time="2025-09-04T23:52:13.778828942Z" level=info msg="Forcibly stopping sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\"" Sep 4 23:52:13.779931 containerd[1500]: time="2025-09-04T23:52:13.778946233Z" level=info msg="TearDown network for sandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" successfully" Sep 4 23:52:14.091148 containerd[1500]: time="2025-09-04T23:52:14.090897657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
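Most of this stretch of the journal is the same cleanup cycle repeated per stale sandbox: StopPodSandbox, TearDown, a forcible stop, a benign "not found" status warning (the sandbox is already gone), then "RemovePodSandbox ... returns successfully". A small parsing sketch for auditing those cycles from the raw journal text; the regexes mirror the message shapes above, and the file name in the usage comment is hypothetical:

# Pair the "returns successfully" and "not found" messages per sandbox ID.
import re
from collections import defaultdict

REMOVED = re.compile(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully')
NOT_FOUND = re.compile(r'sandboxID \\"([0-9a-f]{64})\\": an error occurred when try to find sandbox: not found')

def audit_sandbox_cleanup(journal_text: str) -> dict:
    events = defaultdict(lambda: {"removed": False, "not_found_warning": False})
    for sid in REMOVED.findall(journal_text):
        events[sid]["removed"] = True
    for sid in NOT_FOUND.findall(journal_text):
        events[sid]["not_found_warning"] = True
    return dict(events)

# Usage (hypothetical file name):
#   audit_sandbox_cleanup(open("containerd.journal.txt").read())
# returns, per sandbox ID, whether it was removed and whether the benign
# "not found" warning was logged for it.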
Sep 4 23:52:14.091148 containerd[1500]: time="2025-09-04T23:52:14.090999519Z" level=info msg="RemovePodSandbox \"766b47e4aec6067dc459739e632b87036a651269dfe2594c99d23b52032082f1\" returns successfully" Sep 4 23:52:14.091721 containerd[1500]: time="2025-09-04T23:52:14.091670092Z" level=info msg="StopPodSandbox for \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\"" Sep 4 23:52:14.091891 containerd[1500]: time="2025-09-04T23:52:14.091821667Z" level=info msg="TearDown network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" successfully" Sep 4 23:52:14.091891 containerd[1500]: time="2025-09-04T23:52:14.091839581Z" level=info msg="StopPodSandbox for \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" returns successfully" Sep 4 23:52:14.092248 containerd[1500]: time="2025-09-04T23:52:14.092155977Z" level=info msg="RemovePodSandbox for \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\"" Sep 4 23:52:14.092248 containerd[1500]: time="2025-09-04T23:52:14.092189680Z" level=info msg="Forcibly stopping sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\"" Sep 4 23:52:14.092344 containerd[1500]: time="2025-09-04T23:52:14.092280952Z" level=info msg="TearDown network for sandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" successfully" Sep 4 23:52:14.337495 containerd[1500]: time="2025-09-04T23:52:14.337369757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:14.337495 containerd[1500]: time="2025-09-04T23:52:14.337487238Z" level=info msg="RemovePodSandbox \"b669ec06252241d109925ef3cc92545f95e1998004ea7caa7574cb2524592b87\" returns successfully" Sep 4 23:52:14.344409 containerd[1500]: time="2025-09-04T23:52:14.344244204Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:52:14.344532 containerd[1500]: time="2025-09-04T23:52:14.344453148Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:52:14.344532 containerd[1500]: time="2025-09-04T23:52:14.344471643Z" level=info msg="StopPodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:52:14.344962 containerd[1500]: time="2025-09-04T23:52:14.344923344Z" level=info msg="RemovePodSandbox for \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:52:14.344962 containerd[1500]: time="2025-09-04T23:52:14.344952919Z" level=info msg="Forcibly stopping sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\"" Sep 4 23:52:14.345202 containerd[1500]: time="2025-09-04T23:52:14.345062225Z" level=info msg="TearDown network for sandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" successfully" Sep 4 23:52:14.584638 systemd[1]: run-containerd-runc-k8s.io-f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7-runc.CVUMmH.mount: Deactivated successfully. 
Sep 4 23:52:14.628371 kubelet[2675]: I0904 23:52:14.628165 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-647fc84596-qpbf5" podStartSLOduration=63.168405355 podStartE2EDuration="1m14.628128229s" podCreationTimestamp="2025-09-04 23:51:00 +0000 UTC" firstStartedPulling="2025-09-04 23:52:00.733350901 +0000 UTC m=+110.711271191" lastFinishedPulling="2025-09-04 23:52:12.193073765 +0000 UTC m=+122.170994065" observedRunningTime="2025-09-04 23:52:14.627520855 +0000 UTC m=+124.605441156" watchObservedRunningTime="2025-09-04 23:52:14.628128229 +0000 UTC m=+124.606048529" Sep 4 23:52:15.506152 containerd[1500]: time="2025-09-04T23:52:15.505995560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:15.506821 containerd[1500]: time="2025-09-04T23:52:15.506449424Z" level=info msg="RemovePodSandbox \"8f63197d84b4627a24543305041c605d498d6b63ecf2e0b96424aee14d6323b5\" returns successfully" Sep 4 23:52:15.507289 containerd[1500]: time="2025-09-04T23:52:15.507199668Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:52:15.507512 containerd[1500]: time="2025-09-04T23:52:15.507405115Z" level=info msg="TearDown network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" successfully" Sep 4 23:52:15.507512 containerd[1500]: time="2025-09-04T23:52:15.507428960Z" level=info msg="StopPodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" returns successfully" Sep 4 23:52:15.507955 containerd[1500]: time="2025-09-04T23:52:15.507906630Z" level=info msg="RemovePodSandbox for \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:52:15.507955 containerd[1500]: time="2025-09-04T23:52:15.507951514Z" level=info msg="Forcibly stopping sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\"" Sep 4 23:52:15.508208 containerd[1500]: time="2025-09-04T23:52:15.508080066Z" level=info msg="TearDown network for sandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" successfully" Sep 4 23:52:15.832980 containerd[1500]: time="2025-09-04T23:52:15.832882130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:15.833193 containerd[1500]: time="2025-09-04T23:52:15.833007326Z" level=info msg="RemovePodSandbox \"7e2689109fd75564ddd37f2198f87865075149fae56052a3e725fec1f0345555\" returns successfully" Sep 4 23:52:15.833831 containerd[1500]: time="2025-09-04T23:52:15.833701213Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" Sep 4 23:52:15.834017 containerd[1500]: time="2025-09-04T23:52:15.833872616Z" level=info msg="TearDown network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" successfully" Sep 4 23:52:15.834017 containerd[1500]: time="2025-09-04T23:52:15.833885961Z" level=info msg="StopPodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" returns successfully" Sep 4 23:52:15.834295 containerd[1500]: time="2025-09-04T23:52:15.834266197Z" level=info msg="RemovePodSandbox for \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" Sep 4 23:52:15.834352 containerd[1500]: time="2025-09-04T23:52:15.834298548Z" level=info msg="Forcibly stopping sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\"" Sep 4 23:52:15.834486 containerd[1500]: time="2025-09-04T23:52:15.834430887Z" level=info msg="TearDown network for sandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" successfully" Sep 4 23:52:15.995244 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:45094.service - OpenSSH per-connection server daemon (10.0.0.1:45094). Sep 4 23:52:16.123880 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 45094 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:16.126173 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:16.132176 systemd-logind[1485]: New session 14 of user core. Sep 4 23:52:16.144352 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:52:16.187177 containerd[1500]: time="2025-09-04T23:52:16.186999518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:16.187177 containerd[1500]: time="2025-09-04T23:52:16.187157295Z" level=info msg="RemovePodSandbox \"adf60304181d81248c40be91e0dad05bfcbbc0a1a2d368c41fa9fee3ed69242f\" returns successfully" Sep 4 23:52:16.187826 containerd[1500]: time="2025-09-04T23:52:16.187792531Z" level=info msg="StopPodSandbox for \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\"" Sep 4 23:52:16.187952 containerd[1500]: time="2025-09-04T23:52:16.187929639Z" level=info msg="TearDown network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" successfully" Sep 4 23:52:16.187952 containerd[1500]: time="2025-09-04T23:52:16.187946802Z" level=info msg="StopPodSandbox for \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" returns successfully" Sep 4 23:52:16.188498 containerd[1500]: time="2025-09-04T23:52:16.188438668Z" level=info msg="RemovePodSandbox for \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\"" Sep 4 23:52:16.188498 containerd[1500]: time="2025-09-04T23:52:16.188501005Z" level=info msg="Forcibly stopping sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\"" Sep 4 23:52:16.188726 containerd[1500]: time="2025-09-04T23:52:16.188640167Z" level=info msg="TearDown network for sandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" successfully" Sep 4 23:52:16.278476 sshd[5909]: Connection closed by 10.0.0.1 port 45094 Sep 4 23:52:16.278912 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:16.284158 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:45094.service: Deactivated successfully. Sep 4 23:52:16.286902 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:52:16.287770 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:52:16.288808 systemd-logind[1485]: Removed session 14. Sep 4 23:52:16.571871 containerd[1500]: time="2025-09-04T23:52:16.571801895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:16.572685 containerd[1500]: time="2025-09-04T23:52:16.571899038Z" level=info msg="RemovePodSandbox \"d7e9cacbd579d5ed79a88fb44358cd4962e6c23df378f105ee74fcc30be809f4\" returns successfully" Sep 4 23:52:16.572835 containerd[1500]: time="2025-09-04T23:52:16.572803672Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:52:16.572971 containerd[1500]: time="2025-09-04T23:52:16.572943696Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:52:16.572971 containerd[1500]: time="2025-09-04T23:52:16.572970035Z" level=info msg="StopPodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:52:16.573372 containerd[1500]: time="2025-09-04T23:52:16.573316207Z" level=info msg="RemovePodSandbox for \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:52:16.573372 containerd[1500]: time="2025-09-04T23:52:16.573343969Z" level=info msg="Forcibly stopping sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\"" Sep 4 23:52:16.617080 containerd[1500]: time="2025-09-04T23:52:16.573459187Z" level=info msg="TearDown network for sandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" successfully" Sep 4 23:52:16.899462 containerd[1500]: time="2025-09-04T23:52:16.899267668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:16.899462 containerd[1500]: time="2025-09-04T23:52:16.899384769Z" level=info msg="RemovePodSandbox \"fc88acbd6b78b2db81078cd16ee6a69555e771e59e0d338a2c9ca28639c46d81\" returns successfully" Sep 4 23:52:16.899970 containerd[1500]: time="2025-09-04T23:52:16.899930025Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:52:16.900158 containerd[1500]: time="2025-09-04T23:52:16.900135382Z" level=info msg="TearDown network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" successfully" Sep 4 23:52:16.900158 containerd[1500]: time="2025-09-04T23:52:16.900154929Z" level=info msg="StopPodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" returns successfully" Sep 4 23:52:16.900614 containerd[1500]: time="2025-09-04T23:52:16.900562507Z" level=info msg="RemovePodSandbox for \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:52:16.900678 containerd[1500]: time="2025-09-04T23:52:16.900608914Z" level=info msg="Forcibly stopping sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\"" Sep 4 23:52:16.900775 containerd[1500]: time="2025-09-04T23:52:16.900726756Z" level=info msg="TearDown network for sandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" successfully" Sep 4 23:52:17.123504 containerd[1500]: time="2025-09-04T23:52:17.123414133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.123831 containerd[1500]: time="2025-09-04T23:52:17.123523319Z" level=info msg="RemovePodSandbox \"4706e9c8e89c5c408494ee99a7ac36d0c053f1876a1d5027b60de0dd26cb1c2b\" returns successfully" Sep 4 23:52:17.124280 containerd[1500]: time="2025-09-04T23:52:17.124106888Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" Sep 4 23:52:17.125424 containerd[1500]: time="2025-09-04T23:52:17.124374402Z" level=info msg="TearDown network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" successfully" Sep 4 23:52:17.125424 containerd[1500]: time="2025-09-04T23:52:17.124396865Z" level=info msg="StopPodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" returns successfully" Sep 4 23:52:17.125424 containerd[1500]: time="2025-09-04T23:52:17.124711707Z" level=info msg="RemovePodSandbox for \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" Sep 4 23:52:17.125424 containerd[1500]: time="2025-09-04T23:52:17.124741944Z" level=info msg="Forcibly stopping sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\"" Sep 4 23:52:17.125424 containerd[1500]: time="2025-09-04T23:52:17.124843836Z" level=info msg="TearDown network for sandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" successfully" Sep 4 23:52:17.134506 containerd[1500]: time="2025-09-04T23:52:17.134405905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.134675 containerd[1500]: time="2025-09-04T23:52:17.134522314Z" level=info msg="RemovePodSandbox \"213e21ad3cc91b5e315a133bb5f679ca6232b6bb1d8f5c30d20197ee9fe9f468\" returns successfully" Sep 4 23:52:17.135114 containerd[1500]: time="2025-09-04T23:52:17.135071449Z" level=info msg="StopPodSandbox for \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\"" Sep 4 23:52:17.135285 containerd[1500]: time="2025-09-04T23:52:17.135236329Z" level=info msg="TearDown network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" successfully" Sep 4 23:52:17.135285 containerd[1500]: time="2025-09-04T23:52:17.135253411Z" level=info msg="StopPodSandbox for \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" returns successfully" Sep 4 23:52:17.135691 containerd[1500]: time="2025-09-04T23:52:17.135658413Z" level=info msg="RemovePodSandbox for \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\"" Sep 4 23:52:17.135759 containerd[1500]: time="2025-09-04T23:52:17.135691516Z" level=info msg="Forcibly stopping sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\"" Sep 4 23:52:17.135839 containerd[1500]: time="2025-09-04T23:52:17.135782908Z" level=info msg="TearDown network for sandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" successfully" Sep 4 23:52:17.137761 containerd[1500]: time="2025-09-04T23:52:17.137679289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:17.139139 containerd[1500]: time="2025-09-04T23:52:17.139079356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 4 23:52:17.141930 containerd[1500]: 
time="2025-09-04T23:52:17.141808977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.141930 containerd[1500]: time="2025-09-04T23:52:17.141899968Z" level=info msg="RemovePodSandbox \"f69e7e0a12cebc9c251262217bc3f517ccd4ebd9281012392d3cdf40aa057123\" returns successfully" Sep 4 23:52:17.142839 containerd[1500]: time="2025-09-04T23:52:17.142787049Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:52:17.143068 containerd[1500]: time="2025-09-04T23:52:17.142931601Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:52:17.143068 containerd[1500]: time="2025-09-04T23:52:17.142946780Z" level=info msg="StopPodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:52:17.143456 containerd[1500]: time="2025-09-04T23:52:17.143418839Z" level=info msg="RemovePodSandbox for \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:52:17.143507 containerd[1500]: time="2025-09-04T23:52:17.143465577Z" level=info msg="Forcibly stopping sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\"" Sep 4 23:52:17.143627 containerd[1500]: time="2025-09-04T23:52:17.143574432Z" level=info msg="TearDown network for sandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" successfully" Sep 4 23:52:17.144379 containerd[1500]: time="2025-09-04T23:52:17.143912649Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:17.151677 containerd[1500]: time="2025-09-04T23:52:17.151500570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:17.153458 containerd[1500]: time="2025-09-04T23:52:17.153403092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 4.959744335s" Sep 4 23:52:17.153458 containerd[1500]: time="2025-09-04T23:52:17.153447536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 4 23:52:17.155279 containerd[1500]: time="2025-09-04T23:52:17.154597712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 4 23:52:17.156254 containerd[1500]: time="2025-09-04T23:52:17.156199730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.156442 containerd[1500]: time="2025-09-04T23:52:17.156334694Z" level=info msg="RemovePodSandbox \"5ed544f69c865b6901e4fcb585e2bbdd8b0311fe74c73b92e26e078995859328\" returns successfully" Sep 4 23:52:17.156987 containerd[1500]: time="2025-09-04T23:52:17.156936247Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:52:17.156987 containerd[1500]: time="2025-09-04T23:52:17.156971383Z" level=info msg="CreateContainer within sandbox \"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 23:52:17.157468 containerd[1500]: time="2025-09-04T23:52:17.157225200Z" level=info msg="TearDown network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" successfully" Sep 4 23:52:17.157468 containerd[1500]: time="2025-09-04T23:52:17.157241221Z" level=info msg="StopPodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" returns successfully" Sep 4 23:52:17.157645 containerd[1500]: time="2025-09-04T23:52:17.157609835Z" level=info msg="RemovePodSandbox for \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:52:17.157706 containerd[1500]: time="2025-09-04T23:52:17.157653447Z" level=info msg="Forcibly stopping sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\"" Sep 4 23:52:17.157846 containerd[1500]: time="2025-09-04T23:52:17.157780346Z" level=info msg="TearDown network for sandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" successfully" Sep 4 23:52:17.165493 containerd[1500]: time="2025-09-04T23:52:17.165393845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.165493 containerd[1500]: time="2025-09-04T23:52:17.165487461Z" level=info msg="RemovePodSandbox \"a79a8807fee682e3b3f25b3b63b581c9a072ff1e7d2fb389d1a4b6dd934d56fc\" returns successfully" Sep 4 23:52:17.166085 containerd[1500]: time="2025-09-04T23:52:17.166006660Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" Sep 4 23:52:17.166303 containerd[1500]: time="2025-09-04T23:52:17.166270777Z" level=info msg="TearDown network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" successfully" Sep 4 23:52:17.166303 containerd[1500]: time="2025-09-04T23:52:17.166290844Z" level=info msg="StopPodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" returns successfully" Sep 4 23:52:17.166717 containerd[1500]: time="2025-09-04T23:52:17.166691388Z" level=info msg="RemovePodSandbox for \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" Sep 4 23:52:17.166803 containerd[1500]: time="2025-09-04T23:52:17.166717688Z" level=info msg="Forcibly stopping sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\"" Sep 4 23:52:17.166864 containerd[1500]: time="2025-09-04T23:52:17.166806776Z" level=info msg="TearDown network for sandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" successfully" Sep 4 23:52:17.172011 containerd[1500]: time="2025-09-04T23:52:17.171934403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.172158 containerd[1500]: time="2025-09-04T23:52:17.172019083Z" level=info msg="RemovePodSandbox \"e2075f557db2f9d025be822488c1ec0abbc4f562ccac450a4a57d3cb8bdb61d0\" returns successfully" Sep 4 23:52:17.173082 containerd[1500]: time="2025-09-04T23:52:17.172677472Z" level=info msg="StopPodSandbox for \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\"" Sep 4 23:52:17.173082 containerd[1500]: time="2025-09-04T23:52:17.172862712Z" level=info msg="TearDown network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" successfully" Sep 4 23:52:17.173082 containerd[1500]: time="2025-09-04T23:52:17.172939095Z" level=info msg="StopPodSandbox for \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" returns successfully" Sep 4 23:52:17.173453 containerd[1500]: time="2025-09-04T23:52:17.173393381Z" level=info msg="RemovePodSandbox for \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\"" Sep 4 23:52:17.173453 containerd[1500]: time="2025-09-04T23:52:17.173436602Z" level=info msg="Forcibly stopping sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\"" Sep 4 23:52:17.173931 containerd[1500]: time="2025-09-04T23:52:17.173540407Z" level=info msg="TearDown network for sandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" successfully" Sep 4 23:52:17.185309 containerd[1500]: time="2025-09-04T23:52:17.184907195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.185309 containerd[1500]: time="2025-09-04T23:52:17.185010660Z" level=info msg="RemovePodSandbox \"ab9db926c41a213043c62f36876a57c27cf60280a9b94830af512232212d8add\" returns successfully" Sep 4 23:52:17.185998 containerd[1500]: time="2025-09-04T23:52:17.185963765Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:52:17.186228 containerd[1500]: time="2025-09-04T23:52:17.186132763Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:52:17.186228 containerd[1500]: time="2025-09-04T23:52:17.186152691Z" level=info msg="StopPodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:52:17.186456 containerd[1500]: time="2025-09-04T23:52:17.186430484Z" level=info msg="RemovePodSandbox for \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:52:17.186510 containerd[1500]: time="2025-09-04T23:52:17.186455872Z" level=info msg="Forcibly stopping sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\"" Sep 4 23:52:17.186607 containerd[1500]: time="2025-09-04T23:52:17.186555139Z" level=info msg="TearDown network for sandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" successfully" Sep 4 23:52:17.196115 containerd[1500]: time="2025-09-04T23:52:17.196061031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.196310 containerd[1500]: time="2025-09-04T23:52:17.196161300Z" level=info msg="RemovePodSandbox \"58cdd4f6b2d5d13839917c2020d18474005e60bef1d2951f29f933d54637c97d\" returns successfully" Sep 4 23:52:17.196788 containerd[1500]: time="2025-09-04T23:52:17.196735562Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:52:17.196983 containerd[1500]: time="2025-09-04T23:52:17.196938543Z" level=info msg="TearDown network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" successfully" Sep 4 23:52:17.196983 containerd[1500]: time="2025-09-04T23:52:17.196958882Z" level=info msg="StopPodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" returns successfully" Sep 4 23:52:17.197463 containerd[1500]: time="2025-09-04T23:52:17.197418507Z" level=info msg="RemovePodSandbox for \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:52:17.197463 containerd[1500]: time="2025-09-04T23:52:17.197458543Z" level=info msg="Forcibly stopping sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\"" Sep 4 23:52:17.197749 containerd[1500]: time="2025-09-04T23:52:17.197570494Z" level=info msg="TearDown network for sandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" successfully" Sep 4 23:52:17.198768 containerd[1500]: time="2025-09-04T23:52:17.198615211Z" level=info msg="CreateContainer within sandbox \"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8cc8d7410e30d3f56cd10562717b1839c9465c78eef2a5f140fe61d4a659cd0c\"" Sep 4 23:52:17.199718 containerd[1500]: time="2025-09-04T23:52:17.199691258Z" level=info msg="StartContainer for 
\"8cc8d7410e30d3f56cd10562717b1839c9465c78eef2a5f140fe61d4a659cd0c\"" Sep 4 23:52:17.203513 containerd[1500]: time="2025-09-04T23:52:17.203463783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.203598 containerd[1500]: time="2025-09-04T23:52:17.203534125Z" level=info msg="RemovePodSandbox \"a58c8fc8709d39f49212af8e49753d22f633fecf3e94725896bdf27358425501\" returns successfully" Sep 4 23:52:17.204003 containerd[1500]: time="2025-09-04T23:52:17.203976098Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" Sep 4 23:52:17.204167 containerd[1500]: time="2025-09-04T23:52:17.204140698Z" level=info msg="TearDown network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" successfully" Sep 4 23:52:17.204167 containerd[1500]: time="2025-09-04T23:52:17.204162299Z" level=info msg="StopPodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" returns successfully" Sep 4 23:52:17.204570 containerd[1500]: time="2025-09-04T23:52:17.204531374Z" level=info msg="RemovePodSandbox for \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" Sep 4 23:52:17.204637 containerd[1500]: time="2025-09-04T23:52:17.204587940Z" level=info msg="Forcibly stopping sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\"" Sep 4 23:52:17.204775 containerd[1500]: time="2025-09-04T23:52:17.204720389Z" level=info msg="TearDown network for sandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" successfully" Sep 4 23:52:17.211595 containerd[1500]: time="2025-09-04T23:52:17.211515026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.212222 containerd[1500]: time="2025-09-04T23:52:17.211617940Z" level=info msg="RemovePodSandbox \"057948efc2fc3674f6cbbb5f8c45c742e2b18df3520fc8cedbe8c0e18c2b0d56\" returns successfully" Sep 4 23:52:17.212718 containerd[1500]: time="2025-09-04T23:52:17.212660934Z" level=info msg="StopPodSandbox for \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\"" Sep 4 23:52:17.212839 containerd[1500]: time="2025-09-04T23:52:17.212813792Z" level=info msg="TearDown network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" successfully" Sep 4 23:52:17.212839 containerd[1500]: time="2025-09-04T23:52:17.212831365Z" level=info msg="StopPodSandbox for \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" returns successfully" Sep 4 23:52:17.213207 containerd[1500]: time="2025-09-04T23:52:17.213169292Z" level=info msg="RemovePodSandbox for \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\"" Sep 4 23:52:17.213293 containerd[1500]: time="2025-09-04T23:52:17.213233101Z" level=info msg="Forcibly stopping sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\"" Sep 4 23:52:17.213437 containerd[1500]: time="2025-09-04T23:52:17.213375059Z" level=info msg="TearDown network for sandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" successfully" Sep 4 23:52:17.221494 containerd[1500]: time="2025-09-04T23:52:17.221253446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.221494 containerd[1500]: time="2025-09-04T23:52:17.221346923Z" level=info msg="RemovePodSandbox \"1f57545e3260721aa2c0cc6c933c2e1fe8ffb0a5c922dc304b759c0b3586cd94\" returns successfully" Sep 4 23:52:17.222524 containerd[1500]: time="2025-09-04T23:52:17.222220247Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:52:17.222524 containerd[1500]: time="2025-09-04T23:52:17.222421026Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:52:17.222524 containerd[1500]: time="2025-09-04T23:52:17.222441474Z" level=info msg="StopPodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:52:17.226092 containerd[1500]: time="2025-09-04T23:52:17.223515016Z" level=info msg="RemovePodSandbox for \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:52:17.226092 containerd[1500]: time="2025-09-04T23:52:17.223547467Z" level=info msg="Forcibly stopping sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\"" Sep 4 23:52:17.226092 containerd[1500]: time="2025-09-04T23:52:17.223662002Z" level=info msg="TearDown network for sandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" successfully" Sep 4 23:52:17.229617 containerd[1500]: time="2025-09-04T23:52:17.229560701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.229725 containerd[1500]: time="2025-09-04T23:52:17.229644990Z" level=info msg="RemovePodSandbox \"48d2935254060cea4375b65bdb0f8bbfb05d66dd3d9bbea120af2132a1010562\" returns successfully" Sep 4 23:52:17.230233 containerd[1500]: time="2025-09-04T23:52:17.230205536Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:52:17.230357 containerd[1500]: time="2025-09-04T23:52:17.230323719Z" level=info msg="TearDown network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" successfully" Sep 4 23:52:17.230357 containerd[1500]: time="2025-09-04T23:52:17.230343596Z" level=info msg="StopPodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" returns successfully" Sep 4 23:52:17.231018 containerd[1500]: time="2025-09-04T23:52:17.230985355Z" level=info msg="RemovePodSandbox for \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:52:17.231102 containerd[1500]: time="2025-09-04T23:52:17.231070635Z" level=info msg="Forcibly stopping sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\"" Sep 4 23:52:17.231862 containerd[1500]: time="2025-09-04T23:52:17.231169011Z" level=info msg="TearDown network for sandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" successfully" Sep 4 23:52:17.237802 containerd[1500]: time="2025-09-04T23:52:17.237716561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 23:52:17.237932 containerd[1500]: time="2025-09-04T23:52:17.237832570Z" level=info msg="RemovePodSandbox \"5137ecb400e7139add3e0a0b0c3bba46e3c06327e24de279603f1ea209f5e04c\" returns successfully" Sep 4 23:52:17.238503 containerd[1500]: time="2025-09-04T23:52:17.238364512Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" Sep 4 23:52:17.238673 containerd[1500]: time="2025-09-04T23:52:17.238642956Z" level=info msg="TearDown network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" successfully" Sep 4 23:52:17.238673 containerd[1500]: time="2025-09-04T23:52:17.238666661Z" level=info msg="StopPodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" returns successfully" Sep 4 23:52:17.239147 containerd[1500]: time="2025-09-04T23:52:17.239123081Z" level=info msg="RemovePodSandbox for \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" Sep 4 23:52:17.239303 containerd[1500]: time="2025-09-04T23:52:17.239275397Z" level=info msg="Forcibly stopping sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\"" Sep 4 23:52:17.239435 containerd[1500]: time="2025-09-04T23:52:17.239382068Z" level=info msg="TearDown network for sandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" successfully" Sep 4 23:52:17.248535 containerd[1500]: time="2025-09-04T23:52:17.248441621Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.248747 containerd[1500]: time="2025-09-04T23:52:17.248564292Z" level=info msg="RemovePodSandbox \"f32d1ac7da52dd50bb74ca12d7deeaefb5e5e174cc3992fe50fae82a601e742c\" returns successfully" Sep 4 23:52:17.249528 systemd[1]: Started cri-containerd-8cc8d7410e30d3f56cd10562717b1839c9465c78eef2a5f140fe61d4a659cd0c.scope - libcontainer container 8cc8d7410e30d3f56cd10562717b1839c9465c78eef2a5f140fe61d4a659cd0c. Sep 4 23:52:17.250179 containerd[1500]: time="2025-09-04T23:52:17.250106446Z" level=info msg="StopPodSandbox for \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\"" Sep 4 23:52:17.250516 containerd[1500]: time="2025-09-04T23:52:17.250488596Z" level=info msg="TearDown network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" successfully" Sep 4 23:52:17.250516 containerd[1500]: time="2025-09-04T23:52:17.250514194Z" level=info msg="StopPodSandbox for \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" returns successfully" Sep 4 23:52:17.251267 containerd[1500]: time="2025-09-04T23:52:17.251197120Z" level=info msg="RemovePodSandbox for \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\"" Sep 4 23:52:17.251364 containerd[1500]: time="2025-09-04T23:52:17.251266551Z" level=info msg="Forcibly stopping sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\"" Sep 4 23:52:17.251478 containerd[1500]: time="2025-09-04T23:52:17.251418377Z" level=info msg="TearDown network for sandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" successfully" Sep 4 23:52:17.259463 containerd[1500]: time="2025-09-04T23:52:17.259371175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 23:52:17.259617 containerd[1500]: time="2025-09-04T23:52:17.259482564Z" level=info msg="RemovePodSandbox \"8c4992f6140ca9aaf5e83c85d4ae458a7d8b04015cbdba2b6ee31235540154fb\" returns successfully" Sep 4 23:52:17.348788 containerd[1500]: time="2025-09-04T23:52:17.348674411Z" level=info msg="StartContainer for \"8cc8d7410e30d3f56cd10562717b1839c9465c78eef2a5f140fe61d4a659cd0c\" returns successfully" Sep 4 23:52:18.820103 containerd[1500]: time="2025-09-04T23:52:18.819881945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:18.821120 containerd[1500]: time="2025-09-04T23:52:18.821072447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 4 23:52:18.822954 containerd[1500]: time="2025-09-04T23:52:18.822902443Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:18.837864 containerd[1500]: time="2025-09-04T23:52:18.837777627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:18.838652 containerd[1500]: time="2025-09-04T23:52:18.838612078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.683978718s" Sep 4 23:52:18.838736 containerd[1500]: time="2025-09-04T23:52:18.838652704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 4 23:52:18.840215 containerd[1500]: time="2025-09-04T23:52:18.840160725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 23:52:18.841895 containerd[1500]: time="2025-09-04T23:52:18.841825661Z" level=info msg="CreateContainer within sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 4 23:52:18.862859 containerd[1500]: time="2025-09-04T23:52:18.862769683Z" level=info msg="CreateContainer within sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\"" Sep 4 23:52:18.863700 containerd[1500]: time="2025-09-04T23:52:18.863667233Z" level=info msg="StartContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\"" Sep 4 23:52:18.910496 systemd[1]: Started cri-containerd-9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4.scope - libcontainer container 9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4. 
Sep 4 23:52:18.962401 containerd[1500]: time="2025-09-04T23:52:18.962358085Z" level=info msg="StartContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" returns successfully" Sep 4 23:52:20.126867 kubelet[2675]: E0904 23:52:20.126800 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:20.946612 containerd[1500]: time="2025-09-04T23:52:20.946522854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:20.948373 containerd[1500]: time="2025-09-04T23:52:20.948298839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 4 23:52:20.950063 containerd[1500]: time="2025-09-04T23:52:20.950011354Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:20.953356 containerd[1500]: time="2025-09-04T23:52:20.953300608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:20.954160 containerd[1500]: time="2025-09-04T23:52:20.954098410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.113901637s" Sep 4 23:52:20.954160 containerd[1500]: time="2025-09-04T23:52:20.954128958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 23:52:20.955609 containerd[1500]: time="2025-09-04T23:52:20.955258976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 4 23:52:20.957809 containerd[1500]: time="2025-09-04T23:52:20.957750287Z" level=info msg="CreateContainer within sandbox \"c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 23:52:20.973919 containerd[1500]: time="2025-09-04T23:52:20.973835559Z" level=info msg="CreateContainer within sandbox \"c9d9a02c01eb8226f910ca2bcdb3f1826c718d1584d27cfe26b4424bc526fb63\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4bad4cf5fa67560573479d451d9e2d7ec0465bcf36ea0a50e867be980bc2246f\"" Sep 4 23:52:20.976269 containerd[1500]: time="2025-09-04T23:52:20.974730574Z" level=info msg="StartContainer for \"4bad4cf5fa67560573479d451d9e2d7ec0465bcf36ea0a50e867be980bc2246f\"" Sep 4 23:52:21.056383 systemd[1]: Started cri-containerd-4bad4cf5fa67560573479d451d9e2d7ec0465bcf36ea0a50e867be980bc2246f.scope - libcontainer container 4bad4cf5fa67560573479d451d9e2d7ec0465bcf36ea0a50e867be980bc2246f. Sep 4 23:52:21.293736 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:35626.service - OpenSSH per-connection server daemon (10.0.0.1:35626). 
Sep 4 23:52:21.306223 containerd[1500]: time="2025-09-04T23:52:21.306164502Z" level=info msg="StartContainer for \"4bad4cf5fa67560573479d451d9e2d7ec0465bcf36ea0a50e867be980bc2246f\" returns successfully" Sep 4 23:52:21.384578 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 35626 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:21.386992 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:21.394159 systemd-logind[1485]: New session 15 of user core. Sep 4 23:52:21.402349 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:52:21.622459 containerd[1500]: time="2025-09-04T23:52:21.622262030Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:21.626026 containerd[1500]: time="2025-09-04T23:52:21.623937134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 4 23:52:21.626026 containerd[1500]: time="2025-09-04T23:52:21.625902956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 670.567998ms" Sep 4 23:52:21.626026 containerd[1500]: time="2025-09-04T23:52:21.625928324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 4 23:52:21.631398 containerd[1500]: time="2025-09-04T23:52:21.631329405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 4 23:52:21.642768 containerd[1500]: time="2025-09-04T23:52:21.642166462Z" level=info msg="CreateContainer within sandbox \"a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 23:52:21.688173 sshd[6065]: Connection closed by 10.0.0.1 port 35626 Sep 4 23:52:21.688671 sshd-session[6063]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:21.705393 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:35626.service: Deactivated successfully. Sep 4 23:52:21.708363 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:52:21.710856 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:52:21.714881 containerd[1500]: time="2025-09-04T23:52:21.714808666Z" level=info msg="CreateContainer within sandbox \"a209a43fe9543173516d3b2c8394ae0b394a543347d2531b38876c8da15bebe4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80d55afb7285d9b0dc5f52960ba71d270e5ab6bf0279e6e2d7e7e03213376931\"" Sep 4 23:52:21.715547 containerd[1500]: time="2025-09-04T23:52:21.715519685Z" level=info msg="StartContainer for \"80d55afb7285d9b0dc5f52960ba71d270e5ab6bf0279e6e2d7e7e03213376931\"" Sep 4 23:52:21.718512 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:35632.service - OpenSSH per-connection server daemon (10.0.0.1:35632). Sep 4 23:52:21.723908 systemd-logind[1485]: Removed session 15. 
Sep 4 23:52:21.755260 systemd[1]: Started cri-containerd-80d55afb7285d9b0dc5f52960ba71d270e5ab6bf0279e6e2d7e7e03213376931.scope - libcontainer container 80d55afb7285d9b0dc5f52960ba71d270e5ab6bf0279e6e2d7e7e03213376931. Sep 4 23:52:21.768936 sshd[6082]: Accepted publickey for core from 10.0.0.1 port 35632 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:21.769929 sshd-session[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:21.777314 systemd-logind[1485]: New session 16 of user core. Sep 4 23:52:21.785202 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 23:52:21.812349 containerd[1500]: time="2025-09-04T23:52:21.812280877Z" level=info msg="StartContainer for \"80d55afb7285d9b0dc5f52960ba71d270e5ab6bf0279e6e2d7e7e03213376931\" returns successfully" Sep 4 23:52:22.113108 sshd[6109]: Connection closed by 10.0.0.1 port 35632 Sep 4 23:52:22.115538 sshd-session[6082]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:22.130870 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:35632.service: Deactivated successfully. Sep 4 23:52:22.136117 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:52:22.142242 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:52:22.156544 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:35648.service - OpenSSH per-connection server daemon (10.0.0.1:35648). Sep 4 23:52:22.158494 systemd-logind[1485]: Removed session 16. Sep 4 23:52:22.296454 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 35648 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:22.298523 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:22.304912 systemd-logind[1485]: New session 17 of user core. Sep 4 23:52:22.322418 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:52:22.495662 sshd[6139]: Connection closed by 10.0.0.1 port 35648 Sep 4 23:52:22.496784 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:22.504845 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:35648.service: Deactivated successfully. Sep 4 23:52:22.509077 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:52:22.510746 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:52:22.513938 systemd-logind[1485]: Removed session 17. 
Sep 4 23:52:22.636875 kubelet[2675]: I0904 23:52:22.636775 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f5f7b99c-t4zg6" podStartSLOduration=69.698355684 podStartE2EDuration="1m28.636744974s" podCreationTimestamp="2025-09-04 23:50:54 +0000 UTC" firstStartedPulling="2025-09-04 23:52:02.016614055 +0000 UTC m=+111.994534355" lastFinishedPulling="2025-09-04 23:52:20.955003345 +0000 UTC m=+130.932923645" observedRunningTime="2025-09-04 23:52:21.619126646 +0000 UTC m=+131.597046966" watchObservedRunningTime="2025-09-04 23:52:22.636744974 +0000 UTC m=+132.614665274" Sep 4 23:52:22.641299 kubelet[2675]: I0904 23:52:22.641187 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f5f7b99c-lbsxp" podStartSLOduration=69.194277808 podStartE2EDuration="1m28.637005495s" podCreationTimestamp="2025-09-04 23:50:54 +0000 UTC" firstStartedPulling="2025-09-04 23:52:02.187877804 +0000 UTC m=+112.165798104" lastFinishedPulling="2025-09-04 23:52:21.630605491 +0000 UTC m=+131.608525791" observedRunningTime="2025-09-04 23:52:22.636123314 +0000 UTC m=+132.614043624" watchObservedRunningTime="2025-09-04 23:52:22.637005495 +0000 UTC m=+132.614925795" Sep 4 23:52:23.390544 systemd[1]: run-containerd-runc-k8s.io-f434bce77922384ea8fa97ffd4a9db41743cbf7943ceef4a6cf1eeb048f313a7-runc.4cI3lT.mount: Deactivated successfully. Sep 4 23:52:23.431941 systemd[1]: run-containerd-runc-k8s.io-a45898d5b3836b2614b63c3627b728d59b625764bebf0b479ad477614d3e9b24-runc.z1xIgp.mount: Deactivated successfully. Sep 4 23:52:24.615150 kubelet[2675]: I0904 23:52:24.615087 2675 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 23:52:25.328748 containerd[1500]: time="2025-09-04T23:52:25.328664096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:25.334466 containerd[1500]: time="2025-09-04T23:52:25.333751977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 4 23:52:25.336095 containerd[1500]: time="2025-09-04T23:52:25.335676992Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:25.345810 containerd[1500]: time="2025-09-04T23:52:25.345697850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:25.347577 containerd[1500]: time="2025-09-04T23:52:25.347438868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.715263931s" Sep 4 23:52:25.347577 containerd[1500]: time="2025-09-04T23:52:25.347506506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 4 23:52:25.353206 containerd[1500]: 
time="2025-09-04T23:52:25.351518751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 4 23:52:25.356505 containerd[1500]: time="2025-09-04T23:52:25.356326974Z" level=info msg="CreateContainer within sandbox \"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 23:52:25.428264 containerd[1500]: time="2025-09-04T23:52:25.428170961Z" level=info msg="CreateContainer within sandbox \"9562827b8342bb713daa85255b7ed419234b333177b450b5197904b1ff2947cc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b2bb74b493b58f0d1ea34b68ec08054bd17e233dda83b9aca52351ed55b387bd\"" Sep 4 23:52:25.429560 containerd[1500]: time="2025-09-04T23:52:25.429347967Z" level=info msg="StartContainer for \"b2bb74b493b58f0d1ea34b68ec08054bd17e233dda83b9aca52351ed55b387bd\"" Sep 4 23:52:25.511979 systemd[1]: Started cri-containerd-b2bb74b493b58f0d1ea34b68ec08054bd17e233dda83b9aca52351ed55b387bd.scope - libcontainer container b2bb74b493b58f0d1ea34b68ec08054bd17e233dda83b9aca52351ed55b387bd. Sep 4 23:52:25.644476 containerd[1500]: time="2025-09-04T23:52:25.644230586Z" level=info msg="StartContainer for \"b2bb74b493b58f0d1ea34b68ec08054bd17e233dda83b9aca52351ed55b387bd\" returns successfully" Sep 4 23:52:26.297598 kubelet[2675]: I0904 23:52:26.290643 2675 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 23:52:26.297598 kubelet[2675]: I0904 23:52:26.290732 2675 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 23:52:26.944958 kubelet[2675]: I0904 23:52:26.944591 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t8wcm" podStartSLOduration=65.159113596 podStartE2EDuration="1m28.944551827s" podCreationTimestamp="2025-09-04 23:50:58 +0000 UTC" firstStartedPulling="2025-09-04 23:52:01.563998078 +0000 UTC m=+111.541918378" lastFinishedPulling="2025-09-04 23:52:25.349436299 +0000 UTC m=+135.327356609" observedRunningTime="2025-09-04 23:52:26.937995912 +0000 UTC m=+136.915916212" watchObservedRunningTime="2025-09-04 23:52:26.944551827 +0000 UTC m=+136.922472137" Sep 4 23:52:27.531227 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:35652.service - OpenSSH per-connection server daemon (10.0.0.1:35652). Sep 4 23:52:27.633425 sshd[6268]: Accepted publickey for core from 10.0.0.1 port 35652 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:27.637210 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:27.646693 systemd-logind[1485]: New session 18 of user core. Sep 4 23:52:27.661416 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:52:28.398464 sshd[6270]: Connection closed by 10.0.0.1 port 35652 Sep 4 23:52:28.399549 sshd-session[6268]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:28.412212 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:35652.service: Deactivated successfully. Sep 4 23:52:28.419711 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:52:28.432672 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:52:28.440332 systemd-logind[1485]: Removed session 18. 
Sep 4 23:52:30.505504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102788716.mount: Deactivated successfully. Sep 4 23:52:30.544199 containerd[1500]: time="2025-09-04T23:52:30.543978917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:30.545421 containerd[1500]: time="2025-09-04T23:52:30.545365558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 4 23:52:30.548180 containerd[1500]: time="2025-09-04T23:52:30.548098393Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:30.552536 containerd[1500]: time="2025-09-04T23:52:30.552467789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:52:30.553626 containerd[1500]: time="2025-09-04T23:52:30.553559795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.201978587s" Sep 4 23:52:30.553626 containerd[1500]: time="2025-09-04T23:52:30.553598879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 4 23:52:30.558644 containerd[1500]: time="2025-09-04T23:52:30.558478716Z" level=info msg="CreateContainer within sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 4 23:52:30.594804 containerd[1500]: time="2025-09-04T23:52:30.594705772Z" level=info msg="CreateContainer within sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\"" Sep 4 23:52:30.595997 containerd[1500]: time="2025-09-04T23:52:30.595948331Z" level=info msg="StartContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\"" Sep 4 23:52:30.675698 systemd[1]: Started cri-containerd-fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a.scope - libcontainer container fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a. 
Sep 4 23:52:30.743061 containerd[1500]: time="2025-09-04T23:52:30.742952490Z" level=info msg="StartContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" returns successfully" Sep 4 23:52:31.715852 containerd[1500]: time="2025-09-04T23:52:31.715773952Z" level=info msg="StopContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" with timeout 30 (s)" Sep 4 23:52:31.744610 containerd[1500]: time="2025-09-04T23:52:31.744535831Z" level=info msg="Stop container \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" with signal terminated" Sep 4 23:52:31.750292 containerd[1500]: time="2025-09-04T23:52:31.750183174Z" level=info msg="StopContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" with timeout 30 (s)" Sep 4 23:52:31.750966 containerd[1500]: time="2025-09-04T23:52:31.750832897Z" level=info msg="Stop container \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" with signal terminated" Sep 4 23:52:31.762659 systemd[1]: cri-containerd-fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a.scope: Deactivated successfully. Sep 4 23:52:31.778582 systemd[1]: cri-containerd-9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4.scope: Deactivated successfully. Sep 4 23:52:31.779317 systemd[1]: cri-containerd-9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4.scope: Consumed 67ms CPU time, 7.6M memory peak, 2.3M read from disk, 12K written to disk. Sep 4 23:52:31.824409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a-rootfs.mount: Deactivated successfully. Sep 4 23:52:31.834275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4-rootfs.mount: Deactivated successfully. 
Sep 4 23:52:32.262189 containerd[1500]: time="2025-09-04T23:52:32.252416165Z" level=info msg="shim disconnected" id=9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4 namespace=k8s.io Sep 4 23:52:32.262557 containerd[1500]: time="2025-09-04T23:52:32.262203562Z" level=warning msg="cleaning up after shim disconnected" id=9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4 namespace=k8s.io Sep 4 23:52:32.262557 containerd[1500]: time="2025-09-04T23:52:32.262227226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:32.262557 containerd[1500]: time="2025-09-04T23:52:32.252417828Z" level=info msg="shim disconnected" id=fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a namespace=k8s.io Sep 4 23:52:32.262557 containerd[1500]: time="2025-09-04T23:52:32.262376898Z" level=warning msg="cleaning up after shim disconnected" id=fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a namespace=k8s.io Sep 4 23:52:32.262557 containerd[1500]: time="2025-09-04T23:52:32.262394240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:32.344193 containerd[1500]: time="2025-09-04T23:52:32.344003282Z" level=info msg="StopContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" returns successfully" Sep 4 23:52:32.356725 containerd[1500]: time="2025-09-04T23:52:32.356178755Z" level=info msg="StopContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" returns successfully" Sep 4 23:52:32.359491 containerd[1500]: time="2025-09-04T23:52:32.357641057Z" level=info msg="StopPodSandbox for \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\"" Sep 4 23:52:32.377547 containerd[1500]: time="2025-09-04T23:52:32.377406870Z" level=info msg="Container to stop \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:32.377547 containerd[1500]: time="2025-09-04T23:52:32.377535142Z" level=info msg="Container to stop \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:52:32.382253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410-shm.mount: Deactivated successfully. Sep 4 23:52:32.390022 systemd[1]: cri-containerd-a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410.scope: Deactivated successfully. Sep 4 23:52:32.469104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410-rootfs.mount: Deactivated successfully. 
Sep 4 23:52:32.470785 containerd[1500]: time="2025-09-04T23:52:32.469105296Z" level=info msg="shim disconnected" id=a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410 namespace=k8s.io Sep 4 23:52:32.470785 containerd[1500]: time="2025-09-04T23:52:32.469191168Z" level=warning msg="cleaning up after shim disconnected" id=a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410 namespace=k8s.io Sep 4 23:52:32.470785 containerd[1500]: time="2025-09-04T23:52:32.469202590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:52:32.724144 kubelet[2675]: I0904 23:52:32.722671 2675 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Sep 4 23:52:33.104862 kubelet[2675]: I0904 23:52:33.103758 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57d55db56d-ts945" podStartSLOduration=57.251435417 podStartE2EDuration="1m26.103728901s" podCreationTimestamp="2025-09-04 23:51:07 +0000 UTC" firstStartedPulling="2025-09-04 23:52:01.702680423 +0000 UTC m=+111.680600723" lastFinishedPulling="2025-09-04 23:52:30.554973907 +0000 UTC m=+140.532894207" observedRunningTime="2025-09-04 23:52:31.729861915 +0000 UTC m=+141.707782225" watchObservedRunningTime="2025-09-04 23:52:33.103728901 +0000 UTC m=+143.081649201" Sep 4 23:52:33.115588 systemd-networkd[1431]: calie1222bc24db: Link DOWN Sep 4 23:52:33.115600 systemd-networkd[1431]: calie1222bc24db: Lost carrier Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.104 [INFO][6440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.109 [INFO][6440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" iface="eth0" netns="/var/run/netns/cni-9c8624d5-4abb-a956-05a7-737f25259e85" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.110 [INFO][6440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" iface="eth0" netns="/var/run/netns/cni-9c8624d5-4abb-a956-05a7-737f25259e85" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.128 [INFO][6440] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" after=18.65478ms iface="eth0" netns="/var/run/netns/cni-9c8624d5-4abb-a956-05a7-737f25259e85" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.128 [INFO][6440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.128 [INFO][6440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.199 [INFO][6454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.200 [INFO][6454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.200 [INFO][6454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.320 [INFO][6454] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.320 [INFO][6454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0" Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.327 [INFO][6454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 4 23:52:33.341105 containerd[1500]: 2025-09-04 23:52:33.336 [INFO][6440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Sep 4 23:52:33.344058 containerd[1500]: time="2025-09-04T23:52:33.341460730Z" level=info msg="TearDown network for sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" successfully" Sep 4 23:52:33.344058 containerd[1500]: time="2025-09-04T23:52:33.341583471Z" level=info msg="StopPodSandbox for \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" returns successfully" Sep 4 23:52:33.350215 systemd[1]: run-netns-cni\x2d9c8624d5\x2d4abb\x2da956\x2d05a7\x2d737f25259e85.mount: Deactivated successfully. Sep 4 23:52:33.436769 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:35666.service - OpenSSH per-connection server daemon (10.0.0.1:35666). 
Sep 4 23:52:33.588800 kubelet[2675]: I0904 23:52:33.588665 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmtkb\" (UniqueName: \"kubernetes.io/projected/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-kube-api-access-cmtkb\") pod \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " Sep 4 23:52:33.600315 kubelet[2675]: I0904 23:52:33.599721 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-ca-bundle\") pod \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " Sep 4 23:52:33.600315 kubelet[2675]: I0904 23:52:33.599807 2675 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-backend-key-pair\") pod \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\" (UID: \"210f206c-d2e9-4f37-8e9c-bb39c462e3a5\") " Sep 4 23:52:33.605649 kubelet[2675]: I0904 23:52:33.605588 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "210f206c-d2e9-4f37-8e9c-bb39c462e3a5" (UID: "210f206c-d2e9-4f37-8e9c-bb39c462e3a5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:52:33.622813 systemd[1]: var-lib-kubelet-pods-210f206c\x2dd2e9\x2d4f37\x2d8e9c\x2dbb39c462e3a5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 4 23:52:33.630433 systemd[1]: var-lib-kubelet-pods-210f206c\x2dd2e9\x2d4f37\x2d8e9c\x2dbb39c462e3a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcmtkb.mount: Deactivated successfully. Sep 4 23:52:33.630583 sshd[6465]: Accepted publickey for core from 10.0.0.1 port 35666 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:33.631213 kubelet[2675]: I0904 23:52:33.630982 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "210f206c-d2e9-4f37-8e9c-bb39c462e3a5" (UID: "210f206c-d2e9-4f37-8e9c-bb39c462e3a5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:52:33.633774 sshd-session[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:33.636117 kubelet[2675]: I0904 23:52:33.635987 2675 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-kube-api-access-cmtkb" (OuterVolumeSpecName: "kube-api-access-cmtkb") pod "210f206c-d2e9-4f37-8e9c-bb39c462e3a5" (UID: "210f206c-d2e9-4f37-8e9c-bb39c462e3a5"). InnerVolumeSpecName "kube-api-access-cmtkb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:52:33.641332 systemd-logind[1485]: New session 19 of user core. Sep 4 23:52:33.650401 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 4 23:52:33.701219 kubelet[2675]: I0904 23:52:33.700941 2675 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cmtkb\" (UniqueName: \"kubernetes.io/projected/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-kube-api-access-cmtkb\") on node \"localhost\" DevicePath \"\"" Sep 4 23:52:33.701219 kubelet[2675]: I0904 23:52:33.700993 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 23:52:33.701219 kubelet[2675]: I0904 23:52:33.701007 2675 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/210f206c-d2e9-4f37-8e9c-bb39c462e3a5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 4 23:52:33.763992 systemd[1]: Removed slice kubepods-besteffort-pod210f206c_d2e9_4f37_8e9c_bb39c462e3a5.slice - libcontainer container kubepods-besteffort-pod210f206c_d2e9_4f37_8e9c_bb39c462e3a5.slice. Sep 4 23:52:33.764167 systemd[1]: kubepods-besteffort-pod210f206c_d2e9_4f37_8e9c_bb39c462e3a5.slice: Consumed 208ms CPU time, 16.7M memory peak, 2.3M read from disk, 12K written to disk. Sep 4 23:52:34.130915 kubelet[2675]: I0904 23:52:34.130836 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210f206c-d2e9-4f37-8e9c-bb39c462e3a5" path="/var/lib/kubelet/pods/210f206c-d2e9-4f37-8e9c-bb39c462e3a5/volumes" Sep 4 23:52:35.884393 sshd[6471]: Connection closed by 10.0.0.1 port 35666 Sep 4 23:52:35.885138 sshd-session[6465]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:35.890649 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:35666.service: Deactivated successfully. Sep 4 23:52:35.893446 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:52:35.894434 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Sep 4 23:52:35.895678 systemd-logind[1485]: Removed session 19. Sep 4 23:52:40.912642 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:50706.service - OpenSSH per-connection server daemon (10.0.0.1:50706). Sep 4 23:52:40.962998 sshd[6520]: Accepted publickey for core from 10.0.0.1 port 50706 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA Sep 4 23:52:40.965261 sshd-session[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:52:40.971766 systemd-logind[1485]: New session 20 of user core. Sep 4 23:52:40.978378 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:52:41.172568 sshd[6522]: Connection closed by 10.0.0.1 port 50706 Sep 4 23:52:41.172896 sshd-session[6520]: pam_unix(sshd:session): session closed for user core Sep 4 23:52:41.178136 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:50706.service: Deactivated successfully. Sep 4 23:52:41.181965 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:52:41.183689 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:52:41.185123 systemd-logind[1485]: Removed session 20. Sep 4 23:52:45.120544 kubelet[2675]: E0904 23:52:45.120481 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 23:52:46.186892 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:50714.service - OpenSSH per-connection server daemon (10.0.0.1:50714). 
Sep 4 23:52:46.231749 sshd[6557]: Accepted publickey for core from 10.0.0.1 port 50714 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:46.233670 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:46.238983 systemd-logind[1485]: New session 21 of user core.
Sep 4 23:52:46.249230 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 23:52:46.564097 sshd[6559]: Connection closed by 10.0.0.1 port 50714
Sep 4 23:52:46.564654 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:46.569418 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:50714.service: Deactivated successfully.
Sep 4 23:52:46.572210 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 23:52:46.573219 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Sep 4 23:52:46.574289 systemd-logind[1485]: Removed session 21.
Sep 4 23:52:51.577989 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:42348.service - OpenSSH per-connection server daemon (10.0.0.1:42348).
Sep 4 23:52:51.625544 sshd[6573]: Accepted publickey for core from 10.0.0.1 port 42348 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:51.627529 sshd-session[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:51.632110 systemd-logind[1485]: New session 22 of user core.
Sep 4 23:52:51.639213 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 23:52:51.774326 sshd[6575]: Connection closed by 10.0.0.1 port 42348
Sep 4 23:52:51.774811 sshd-session[6573]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:51.789808 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:42348.service: Deactivated successfully.
Sep 4 23:52:51.792555 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 23:52:51.794872 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Sep 4 23:52:51.804434 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:42362.service - OpenSSH per-connection server daemon (10.0.0.1:42362).
Sep 4 23:52:51.805876 systemd-logind[1485]: Removed session 22.
Sep 4 23:52:51.844503 sshd[6588]: Accepted publickey for core from 10.0.0.1 port 42362 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:51.846251 sshd-session[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:51.851299 systemd-logind[1485]: New session 23 of user core.
Sep 4 23:52:51.859204 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 23:52:52.121093 kubelet[2675]: E0904 23:52:52.120870 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:52:53.642352 sshd[6591]: Connection closed by 10.0.0.1 port 42362
Sep 4 23:52:53.647381 sshd-session[6588]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:53.699551 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:42370.service - OpenSSH per-connection server daemon (10.0.0.1:42370).
Sep 4 23:52:53.702538 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:42362.service: Deactivated successfully.
Sep 4 23:52:53.705987 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 23:52:53.714341 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Sep 4 23:52:53.718237 systemd-logind[1485]: Removed session 23.
Sep 4 23:52:53.800728 sshd[6601]: Accepted publickey for core from 10.0.0.1 port 42370 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:53.801654 sshd-session[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:53.830014 systemd-logind[1485]: New session 24 of user core.
Sep 4 23:52:53.836442 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 23:52:56.041574 sshd[6606]: Connection closed by 10.0.0.1 port 42370
Sep 4 23:52:56.044568 sshd-session[6601]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:56.056559 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:42370.service: Deactivated successfully.
Sep 4 23:52:56.058806 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 23:52:56.059749 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Sep 4 23:52:56.068387 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:42380.service - OpenSSH per-connection server daemon (10.0.0.1:42380).
Sep 4 23:52:56.070316 systemd-logind[1485]: Removed session 24.
Sep 4 23:52:56.107855 sshd[6655]: Accepted publickey for core from 10.0.0.1 port 42380 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:56.109834 sshd-session[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:56.115025 systemd-logind[1485]: New session 25 of user core.
Sep 4 23:52:56.123258 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 23:52:57.169066 sshd[6658]: Connection closed by 10.0.0.1 port 42380
Sep 4 23:52:57.170612 sshd-session[6655]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:57.182899 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:42380.service: Deactivated successfully.
Sep 4 23:52:57.185357 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 23:52:57.186241 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Sep 4 23:52:57.196461 systemd[1]: Started sshd@26-10.0.0.65:22-10.0.0.1:42384.service - OpenSSH per-connection server daemon (10.0.0.1:42384).
Sep 4 23:52:57.197753 systemd-logind[1485]: Removed session 25.
Sep 4 23:52:57.252652 sshd[6668]: Accepted publickey for core from 10.0.0.1 port 42384 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:52:57.254515 sshd-session[6668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:52:57.260594 systemd-logind[1485]: New session 26 of user core.
Sep 4 23:52:57.270185 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 23:52:57.453569 sshd[6671]: Connection closed by 10.0.0.1 port 42384
Sep 4 23:52:57.453873 sshd-session[6668]: pam_unix(sshd:session): session closed for user core
Sep 4 23:52:57.457717 systemd[1]: sshd@26-10.0.0.65:22-10.0.0.1:42384.service: Deactivated successfully.
Sep 4 23:52:57.460953 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 23:52:57.463494 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Sep 4 23:52:57.464997 systemd-logind[1485]: Removed session 26.
Sep 4 23:53:01.120202 kubelet[2675]: E0904 23:53:01.120120 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:02.478592 systemd[1]: Started sshd@27-10.0.0.65:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962).
Sep 4 23:53:02.539579 sshd[6685]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:02.542176 sshd-session[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:02.550281 systemd-logind[1485]: New session 27 of user core.
Sep 4 23:53:02.558291 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 23:53:02.760731 sshd[6687]: Connection closed by 10.0.0.1 port 49962
Sep 4 23:53:02.761955 sshd-session[6685]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:02.770109 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit.
Sep 4 23:53:02.771807 systemd[1]: sshd@27-10.0.0.65:22-10.0.0.1:49962.service: Deactivated successfully.
Sep 4 23:53:02.775919 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 23:53:02.782748 systemd-logind[1485]: Removed session 27.
Sep 4 23:53:07.120865 kubelet[2675]: E0904 23:53:07.120677 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:07.780491 systemd[1]: Started sshd@28-10.0.0.65:22-10.0.0.1:49978.service - OpenSSH per-connection server daemon (10.0.0.1:49978).
Sep 4 23:53:07.835549 sshd[6701]: Accepted publickey for core from 10.0.0.1 port 49978 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:07.838164 sshd-session[6701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:07.844437 systemd-logind[1485]: New session 28 of user core.
Sep 4 23:53:07.852635 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 23:53:08.036612 sshd[6703]: Connection closed by 10.0.0.1 port 49978
Sep 4 23:53:08.037528 sshd-session[6701]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:08.044254 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit.
Sep 4 23:53:08.045603 systemd[1]: sshd@28-10.0.0.65:22-10.0.0.1:49978.service: Deactivated successfully.
Sep 4 23:53:08.049293 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 23:53:08.050890 systemd-logind[1485]: Removed session 28.
Sep 4 23:53:13.078249 systemd[1]: Started sshd@29-10.0.0.65:22-10.0.0.1:50136.service - OpenSSH per-connection server daemon (10.0.0.1:50136).
Sep 4 23:53:13.234556 sshd[6743]: Accepted publickey for core from 10.0.0.1 port 50136 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:13.233172 sshd-session[6743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:13.266171 systemd-logind[1485]: New session 29 of user core.
Sep 4 23:53:13.273873 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 23:53:13.631031 sshd[6748]: Connection closed by 10.0.0.1 port 50136
Sep 4 23:53:13.631602 sshd-session[6743]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:13.636840 systemd[1]: sshd@29-10.0.0.65:22-10.0.0.1:50136.service: Deactivated successfully.
Sep 4 23:53:13.640979 systemd[1]: session-29.scope: Deactivated successfully.
Sep 4 23:53:13.642116 systemd-logind[1485]: Session 29 logged out. Waiting for processes to exit.
Sep 4 23:53:13.643454 systemd-logind[1485]: Removed session 29.
Sep 4 23:53:17.119928 kubelet[2675]: E0904 23:53:17.119868 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:17.301481 kubelet[2675]: I0904 23:53:17.301397 2675 scope.go:117] "RemoveContainer" containerID="fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a"
Sep 4 23:53:17.304490 containerd[1500]: time="2025-09-04T23:53:17.303787626Z" level=info msg="RemoveContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\""
Sep 4 23:53:17.317643 containerd[1500]: time="2025-09-04T23:53:17.317529346Z" level=info msg="RemoveContainer for \"fae48c1284ad1dfa2a3d45116582520e9618aec1102d0da4e7f1d8504de68b5a\" returns successfully"
Sep 4 23:53:17.318238 kubelet[2675]: I0904 23:53:17.318160 2675 scope.go:117] "RemoveContainer" containerID="9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4"
Sep 4 23:53:17.320479 containerd[1500]: time="2025-09-04T23:53:17.320424615Z" level=info msg="RemoveContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\""
Sep 4 23:53:17.329508 containerd[1500]: time="2025-09-04T23:53:17.329423733Z" level=info msg="RemoveContainer for \"9eaac3da614830af1187daaab103b9b93190b63742ea5e864a3f5207d2be6da4\" returns successfully"
Sep 4 23:53:17.331709 containerd[1500]: time="2025-09-04T23:53:17.331672160Z" level=info msg="StopPodSandbox for \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\""
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.472 [WARNING][6793] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.473 [INFO][6793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.473 [INFO][6793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" iface="eth0" netns=""
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.473 [INFO][6793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.473 [INFO][6793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.569 [INFO][6801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.569 [INFO][6801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.569 [INFO][6801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.579 [WARNING][6801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.579 [INFO][6801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.581 [INFO][6801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 23:53:17.592734 containerd[1500]: 2025-09-04 23:53:17.587 [INFO][6793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.592734 containerd[1500]: time="2025-09-04T23:53:17.592687160Z" level=info msg="TearDown network for sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" successfully"
Sep 4 23:53:17.592734 containerd[1500]: time="2025-09-04T23:53:17.592729702Z" level=info msg="StopPodSandbox for \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" returns successfully"
Sep 4 23:53:17.594444 containerd[1500]: time="2025-09-04T23:53:17.593408055Z" level=info msg="RemovePodSandbox for \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\""
Sep 4 23:53:17.599994 containerd[1500]: time="2025-09-04T23:53:17.599887348Z" level=info msg="Forcibly stopping sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\""
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.690 [WARNING][6819] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" WorkloadEndpoint="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.690 [INFO][6819] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.690 [INFO][6819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" iface="eth0" netns=""
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.690 [INFO][6819] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.690 [INFO][6819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.749 [INFO][6828] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.752 [INFO][6828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.752 [INFO][6828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.761 [WARNING][6828] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.764 [INFO][6828] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" HandleID="k8s-pod-network.a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410" Workload="localhost-k8s-whisker--57d55db56d--ts945-eth0"
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.766 [INFO][6828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 4 23:53:17.778289 containerd[1500]: 2025-09-04 23:53:17.773 [INFO][6819] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410"
Sep 4 23:53:17.780667 containerd[1500]: time="2025-09-04T23:53:17.779155403Z" level=info msg="TearDown network for sandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" successfully"
Sep 4 23:53:17.793534 containerd[1500]: time="2025-09-04T23:53:17.793441942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:53:17.793823 containerd[1500]: time="2025-09-04T23:53:17.793576989Z" level=info msg="RemovePodSandbox \"a7ca92188cc8ee0bc716d4426092358b09feb96e2119b32cd1e140608e8e1410\" returns successfully"
Sep 4 23:53:18.658529 systemd[1]: Started sshd@30-10.0.0.65:22-10.0.0.1:50152.service - OpenSSH per-connection server daemon (10.0.0.1:50152).
Sep 4 23:53:18.753791 sshd[6837]: Accepted publickey for core from 10.0.0.1 port 50152 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:18.756157 sshd-session[6837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:18.764391 systemd-logind[1485]: New session 30 of user core.
Sep 4 23:53:18.773346 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 4 23:53:19.051624 sshd[6839]: Connection closed by 10.0.0.1 port 50152
Sep 4 23:53:19.052107 sshd-session[6837]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:19.057437 systemd[1]: sshd@30-10.0.0.65:22-10.0.0.1:50152.service: Deactivated successfully.
Sep 4 23:53:19.060532 systemd[1]: session-30.scope: Deactivated successfully.
Sep 4 23:53:19.061616 systemd-logind[1485]: Session 30 logged out. Waiting for processes to exit.
Sep 4 23:53:19.063025 systemd-logind[1485]: Removed session 30.
Sep 4 23:53:24.076850 systemd[1]: Started sshd@31-10.0.0.65:22-10.0.0.1:50106.service - OpenSSH per-connection server daemon (10.0.0.1:50106).
Sep 4 23:53:24.120662 sshd[6901]: Accepted publickey for core from 10.0.0.1 port 50106 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:24.123139 sshd-session[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:24.129830 systemd-logind[1485]: New session 31 of user core.
Sep 4 23:53:24.143562 systemd[1]: Started session-31.scope - Session 31 of User core.
Sep 4 23:53:24.334593 sshd[6904]: Connection closed by 10.0.0.1 port 50106
Sep 4 23:53:24.335443 sshd-session[6901]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:24.341854 systemd-logind[1485]: Session 31 logged out. Waiting for processes to exit.
Sep 4 23:53:24.342253 systemd[1]: sshd@31-10.0.0.65:22-10.0.0.1:50106.service: Deactivated successfully.
Sep 4 23:53:24.345271 systemd[1]: session-31.scope: Deactivated successfully.
Sep 4 23:53:24.346433 systemd-logind[1485]: Removed session 31.
Sep 4 23:53:29.125832 kubelet[2675]: E0904 23:53:29.125724 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 23:53:29.361677 systemd[1]: Started sshd@32-10.0.0.65:22-10.0.0.1:50116.service - OpenSSH per-connection server daemon (10.0.0.1:50116).
Sep 4 23:53:29.464928 sshd[6943]: Accepted publickey for core from 10.0.0.1 port 50116 ssh2: RSA SHA256:KJomDBayMF7IjhhE4k9X0SaWwDs4kRcmJUI7JCImWwA
Sep 4 23:53:29.462941 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 23:53:29.480980 systemd-logind[1485]: New session 32 of user core.
Sep 4 23:53:29.488353 systemd[1]: Started session-32.scope - Session 32 of User core.
Sep 4 23:53:29.738391 sshd[6945]: Connection closed by 10.0.0.1 port 50116
Sep 4 23:53:29.739736 sshd-session[6943]: pam_unix(sshd:session): session closed for user core
Sep 4 23:53:29.746801 systemd[1]: sshd@32-10.0.0.65:22-10.0.0.1:50116.service: Deactivated successfully.
Sep 4 23:53:29.750873 systemd[1]: session-32.scope: Deactivated successfully.
Sep 4 23:53:29.758664 systemd-logind[1485]: Session 32 logged out. Waiting for processes to exit.
Sep 4 23:53:29.760030 systemd-logind[1485]: Removed session 32.