Sep 9 00:31:41.033415 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025
Sep 9 00:31:41.033438 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:31:41.033450 kernel: BIOS-provided physical RAM map:
Sep 9 00:31:41.033456 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 00:31:41.033462 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 00:31:41.033468 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 00:31:41.033476 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 9 00:31:41.033482 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 9 00:31:41.033488 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 00:31:41.033497 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 00:31:41.033503 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:31:41.033509 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 00:31:41.033515 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:31:41.033522 kernel: NX (Execute Disable) protection: active
Sep 9 00:31:41.033529 kernel: APIC: Static calls initialized
Sep 9 00:31:41.033544 kernel: SMBIOS 2.8 present.
Sep 9 00:31:41.033551 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 9 00:31:41.033557 kernel: Hypervisor detected: KVM
Sep 9 00:31:41.033564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:31:41.033571 kernel: kvm-clock: using sched offset of 2701626760 cycles
Sep 9 00:31:41.033578 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:31:41.034898 kernel: tsc: Detected 2794.750 MHz processor
Sep 9 00:31:41.034927 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:31:41.034936 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:31:41.034943 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 9 00:31:41.034957 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 00:31:41.034965 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:31:41.034972 kernel: Using GB pages for direct mapping
Sep 9 00:31:41.034979 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:31:41.034987 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 9 00:31:41.034994 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035001 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035008 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035018 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 9 00:31:41.035025 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035032 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035039 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035046 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:31:41.035053 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 9 00:31:41.035061 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 9 00:31:41.035072 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 9 00:31:41.035082 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 9 00:31:41.035089 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 9 00:31:41.035096 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 9 00:31:41.035104 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 9 00:31:41.035111 kernel: No NUMA configuration found
Sep 9 00:31:41.035118 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 9 00:31:41.035126 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 9 00:31:41.035136 kernel: Zone ranges:
Sep 9 00:31:41.035143 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:31:41.035150 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 9 00:31:41.035158 kernel: Normal empty
Sep 9 00:31:41.035165 kernel: Movable zone start for each node
Sep 9 00:31:41.035172 kernel: Early memory node ranges
Sep 9 00:31:41.035180 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 00:31:41.035187 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 9 00:31:41.035194 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 9 00:31:41.035204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:31:41.035211 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 00:31:41.035219 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 9 00:31:41.035226 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:31:41.035243 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:31:41.035251 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:31:41.035258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:31:41.035266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:31:41.035274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:31:41.035284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:31:41.035291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:31:41.035300 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:31:41.035310 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:31:41.035319 kernel: TSC deadline timer available
Sep 9 00:31:41.035328 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 9 00:31:41.035337 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:31:41.035346 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:31:41.035360 kernel: kvm-guest: setup PV sched yield
Sep 9 00:31:41.035372 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 00:31:41.035382 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:31:41.035391 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:31:41.035401 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:31:41.035410 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 9 00:31:41.035419 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 9 00:31:41.035428 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:31:41.035437 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:31:41.035447 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:31:41.035461 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:31:41.035472 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:31:41.035481 kernel: random: crng init done
Sep 9 00:31:41.035490 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:31:41.035500 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:31:41.035509 kernel: Fallback order for Node 0: 0
Sep 9 00:31:41.035518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 9 00:31:41.035527 kernel: Policy zone: DMA32
Sep 9 00:31:41.035535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:31:41.035545 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Sep 9 00:31:41.035553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:31:41.035560 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 9 00:31:41.035568 kernel: ftrace: allocated 149 pages with 4 groups
Sep 9 00:31:41.035575 kernel: Dynamic Preempt: voluntary
Sep 9 00:31:41.035582 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:31:41.035591 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:31:41.035598 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:31:41.035606 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:31:41.035616 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:31:41.035623 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:31:41.035630 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:31:41.035638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:31:41.035645 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:31:41.035653 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:31:41.035660 kernel: Console: colour VGA+ 80x25
Sep 9 00:31:41.035668 kernel: printk: console [ttyS0] enabled
Sep 9 00:31:41.035675 kernel: ACPI: Core revision 20230628
Sep 9 00:31:41.035685 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:31:41.035693 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:31:41.035700 kernel: x2apic enabled
Sep 9 00:31:41.035708 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:31:41.035715 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:31:41.035723 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:31:41.035731 kernel: kvm-guest: setup PV IPIs
Sep 9 00:31:41.035748 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:31:41.035756 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 9 00:31:41.035764 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 9 00:31:41.035772 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:31:41.035779 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:31:41.035789 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:31:41.035797 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:31:41.035805 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:31:41.035813 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:31:41.035820 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:31:41.035831 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:31:41.035852 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:31:41.035860 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:31:41.035878 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:31:41.035894 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:31:41.035902 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:31:41.035910 kernel: active return thunk: srso_return_thunk
Sep 9 00:31:41.035918 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:31:41.035930 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:31:41.035938 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:31:41.035945 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:31:41.035953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:31:41.035961 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:31:41.035969 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:31:41.035976 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:31:41.035984 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:31:41.035992 kernel: landlock: Up and running.
Sep 9 00:31:41.036002 kernel: SELinux: Initializing.
Sep 9 00:31:41.036010 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:31:41.036017 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:31:41.036025 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:31:41.036033 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:31:41.036041 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:31:41.036049 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:31:41.036057 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:31:41.036067 kernel: ... version: 0
Sep 9 00:31:41.036075 kernel: ... bit width: 48
Sep 9 00:31:41.036082 kernel: ... generic registers: 6
Sep 9 00:31:41.036090 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:31:41.036098 kernel: ... max period: 00007fffffffffff
Sep 9 00:31:41.036105 kernel: ... fixed-purpose events: 0
Sep 9 00:31:41.036113 kernel: ... event mask: 000000000000003f
Sep 9 00:31:41.036121 kernel: signal: max sigframe size: 1776
Sep 9 00:31:41.036129 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:31:41.036137 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:31:41.036147 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:31:41.036155 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:31:41.036162 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:31:41.036170 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:31:41.036178 kernel: smpboot: Max logical packages: 1
Sep 9 00:31:41.036185 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 9 00:31:41.036193 kernel: devtmpfs: initialized
Sep 9 00:31:41.036201 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:31:41.036209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:31:41.036220 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:31:41.036227 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:31:41.036244 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:31:41.036252 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:31:41.036260 kernel: audit: type=2000 audit(1757377900.738:1): state=initialized audit_enabled=0 res=1
Sep 9 00:31:41.036267 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:31:41.036275 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:31:41.036283 kernel: cpuidle: using governor menu
Sep 9 00:31:41.036290 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:31:41.036301 kernel: dca service started, version 1.12.1
Sep 9 00:31:41.036309 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 9 00:31:41.036317 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 00:31:41.036324 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:31:41.036333 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:31:41.036341 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:31:41.036348 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:31:41.036356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:31:41.036364 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:31:41.036374 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:31:41.036382 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:31:41.036390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:31:41.036398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:31:41.036405 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 9 00:31:41.036413 kernel: ACPI: Interpreter enabled
Sep 9 00:31:41.036421 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:31:41.036428 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:31:41.036436 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:31:41.036446 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:31:41.036454 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:31:41.036462 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:31:41.036685 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:31:41.036874 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:31:41.037039 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:31:41.037057 kernel: PCI host bridge to bus 0000:00
Sep 9 00:31:41.037219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:31:41.037365 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:31:41.037488 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:31:41.037609 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 9 00:31:41.037734 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 00:31:41.037887 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 9 00:31:41.038011 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:31:41.038203 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 9 00:31:41.038386 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 9 00:31:41.038538 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 9 00:31:41.038694 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 9 00:31:41.038853 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 9 00:31:41.039008 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:31:41.039193 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:31:41.039366 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 9 00:31:41.039520 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 9 00:31:41.039670 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 9 00:31:41.039908 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 9 00:31:41.040061 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 9 00:31:41.040201 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 9 00:31:41.040376 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 9 00:31:41.040545 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 9 00:31:41.040689 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 9 00:31:41.040822 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 9 00:31:41.040978 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 9 00:31:41.041112 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 9 00:31:41.041281 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 9 00:31:41.041421 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:31:41.041613 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 9 00:31:41.041771 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 9 00:31:41.041924 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 9 00:31:41.042113 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 9 00:31:41.042283 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 9 00:31:41.042300 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:31:41.042317 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:31:41.042328 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:31:41.042339 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:31:41.042351 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:31:41.042361 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:31:41.042368 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:31:41.042376 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:31:41.042384 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:31:41.042392 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:31:41.042402 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:31:41.042410 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:31:41.042418 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:31:41.042427 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:31:41.042437 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:31:41.042445 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:31:41.042453 kernel: iommu: Default domain type: Translated
Sep 9 00:31:41.042461 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:31:41.042468 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:31:41.042479 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:31:41.042487 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 00:31:41.042495 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 9 00:31:41.042639 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:31:41.042789 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:31:41.042964 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:31:41.042979 kernel: vgaarb: loaded
Sep 9 00:31:41.042990 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:31:41.043005 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:31:41.043017 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:31:41.043028 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:31:41.043040 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:31:41.043051 kernel: pnp: PnP ACPI init
Sep 9 00:31:41.043244 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 00:31:41.043261 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:31:41.043274 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:31:41.043288 kernel: NET: Registered PF_INET protocol family
Sep 9 00:31:41.043296 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:31:41.043304 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:31:41.043311 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:31:41.043319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:31:41.043327 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:31:41.043336 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:31:41.043346 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:31:41.043357 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:31:41.043371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:31:41.043380 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:31:41.043510 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:31:41.043630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:31:41.043765 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:31:41.043917 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 9 00:31:41.044066 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:31:41.044207 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 9 00:31:41.044223 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:31:41.044231 kernel: Initialise system trusted keyrings
Sep 9 00:31:41.044251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:31:41.044262 kernel: Key type asymmetric registered
Sep 9 00:31:41.044272 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:31:41.044281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 9 00:31:41.044289 kernel: io scheduler mq-deadline registered
Sep 9 00:31:41.044297 kernel: io scheduler kyber registered
Sep 9 00:31:41.044306 kernel: io scheduler bfq registered
Sep 9 00:31:41.044316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:31:41.044331 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:31:41.044343 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:31:41.044353 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:31:41.044360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:31:41.044368 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:31:41.044376 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:31:41.044384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:31:41.044392 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:31:41.044400 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:31:41.044574 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:31:41.044707 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:31:41.044838 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:31:40 UTC (1757377900)
Sep 9 00:31:41.044985 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 9 00:31:41.044997 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:31:41.045004 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:31:41.045012 kernel: Segment Routing with IPv6
Sep 9 00:31:41.045026 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:31:41.045037 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:31:41.045047 kernel: Key type dns_resolver registered
Sep 9 00:31:41.045057 kernel: IPI shorthand broadcast: enabled
Sep 9 00:31:41.045069 kernel: sched_clock: Marking stable (921003241, 167177458)->(1211899702, -123719003)
Sep 9 00:31:41.045079 kernel: registered taskstats version 1
Sep 9 00:31:41.045087 kernel: Loading compiled-in X.509 certificates
Sep 9 00:31:41.045095 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51'
Sep 9 00:31:41.045103 kernel: Key type .fscrypt registered
Sep 9 00:31:41.045113 kernel: Key type fscrypt-provisioning registered
Sep 9 00:31:41.045121 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:31:41.045129 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:31:41.045137 kernel: ima: No architecture policies found
Sep 9 00:31:41.045144 kernel: clk: Disabling unused clocks
Sep 9 00:31:41.045152 kernel: Freeing unused kernel image (initmem) memory: 42880K
Sep 9 00:31:41.045160 kernel: Write protecting the kernel read-only data: 36864k
Sep 9 00:31:41.045168 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 9 00:31:41.045175 kernel: Run /init as init process
Sep 9 00:31:41.045187 kernel: with arguments:
Sep 9 00:31:41.045197 kernel: /init
Sep 9 00:31:41.045207 kernel: with environment:
Sep 9 00:31:41.045217 kernel: HOME=/
Sep 9 00:31:41.045227 kernel: TERM=linux
Sep 9 00:31:41.045248 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:31:41.045262 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:31:41.045277 systemd[1]: Detected virtualization kvm.
Sep 9 00:31:41.045294 systemd[1]: Detected architecture x86-64.
Sep 9 00:31:41.045305 systemd[1]: Running in initrd.
Sep 9 00:31:41.045315 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:31:41.045324 systemd[1]: Hostname set to .
Sep 9 00:31:41.045332 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:31:41.045341 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:31:41.045349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:31:41.045358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:31:41.045371 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:31:41.045394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:31:41.045405 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:31:41.045414 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:31:41.045424 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:31:41.045436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:31:41.045445 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:31:41.045456 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:31:41.045467 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:31:41.045479 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:31:41.045491 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:31:41.045503 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:31:41.045514 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:31:41.045529 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:31:41.045540 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:31:41.045553 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:31:41.045565 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:31:41.045577 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:31:41.045590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:31:41.045601 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:31:41.045614 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:31:41.045625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:31:41.045641 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:31:41.045653 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:31:41.045665 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:31:41.045677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:31:41.045689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:31:41.045702 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:31:41.045713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:31:41.045721 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:31:41.045755 systemd-journald[192]: Collecting audit messages is disabled.
Sep 9 00:31:41.045782 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:31:41.045795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:31:41.045804 systemd-journald[192]: Journal started
Sep 9 00:31:41.045824 systemd-journald[192]: Runtime Journal (/run/log/journal/fb3a2ada3e0a49b090a6bb10d22f9791) is 6.0M, max 48.4M, 42.3M free.
Sep 9 00:31:41.028613 systemd-modules-load[194]: Inserted module 'overlay'
Sep 9 00:31:41.081857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:31:41.081893 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:31:41.081908 kernel: Bridge firewalling registered
Sep 9 00:31:41.061766 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 9 00:31:41.086725 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:31:41.087367 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:31:41.095139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:31:41.112000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:31:41.112914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:31:41.117038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:31:41.117390 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:31:41.130428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:31:41.133096 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:31:41.142107 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:31:41.143527 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:31:41.148551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:31:41.156727 dracut-cmdline[227]: dracut-dracut-053
Sep 9 00:31:41.159784 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:31:41.196299 systemd-resolved[232]: Positive Trust Anchors:
Sep 9 00:31:41.196316 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:31:41.196357 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:31:41.208785 systemd-resolved[232]: Defaulting to hostname 'linux'.
Sep 9 00:31:41.211227 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:31:41.212694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:31:41.289902 kernel: SCSI subsystem initialized
Sep 9 00:31:41.301883 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:31:41.313906 kernel: iscsi: registered transport (tcp)
Sep 9 00:31:41.371920 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:31:41.372010 kernel: QLogic iSCSI HBA Driver
Sep 9 00:31:41.432489 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:31:41.453061 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:31:41.507908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:31:41.508000 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:31:41.508032 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:31:41.559911 kernel: raid6: avx2x4 gen() 22678 MB/s
Sep 9 00:31:41.576898 kernel: raid6: avx2x2 gen() 23953 MB/s
Sep 9 00:31:41.594150 kernel: raid6: avx2x1 gen() 19051 MB/s
Sep 9 00:31:41.594214 kernel: raid6: using algorithm avx2x2 gen() 23953 MB/s
Sep 9 00:31:41.612129 kernel: raid6: .... xor() 14802 MB/s, rmw enabled
Sep 9 00:31:41.612231 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:31:41.637903 kernel: xor: automatically using best checksumming function avx
Sep 9 00:31:41.815892 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:31:41.832978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:31:41.846243 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:31:41.871530 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 9 00:31:41.879314 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:31:41.886044 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:31:41.903779 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Sep 9 00:31:41.940548 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:31:41.949289 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:31:42.023202 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:31:42.034071 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:31:42.049758 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:31:42.051025 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:31:42.053773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:31:42.057803 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:31:42.068038 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:31:42.080947 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:31:42.089877 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:31:42.091879 kernel: libata version 3.00 loaded.
Sep 9 00:31:42.095867 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 00:31:42.099902 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 00:31:42.100888 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 00:31:42.103962 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:31:42.108886 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 9 00:31:42.108160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:31:42.111691 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:31:42.112527 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:31:42.116879 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 9 00:31:42.117085 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 00:31:42.117642 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:31:42.127726 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:31:42.127753 kernel: GPT:9289727 != 19775487
Sep 9 00:31:42.127765 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:31:42.127777 kernel: GPT:9289727 != 19775487
Sep 9 00:31:42.127788 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:31:42.127800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:31:42.127812 kernel: scsi host0: ahci
Sep 9 00:31:42.128059 kernel: scsi host1: ahci
Sep 9 00:31:42.128245 kernel: scsi host2: ahci
Sep 9 00:31:42.120380 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:31:42.133295 kernel: scsi host3: ahci
Sep 9 00:31:42.133531 kernel: scsi host4: ahci
Sep 9 00:31:42.133714 kernel: scsi host5: ahci
Sep 9 00:31:42.133942 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 9 00:31:42.133954 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 9 00:31:42.120557 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:31:42.141147 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 9 00:31:42.141588 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 9 00:31:42.141614 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 9 00:31:42.141630 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 9 00:31:42.130371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:31:42.153951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:31:42.163978 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476)
Sep 9 00:31:42.166898 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Sep 9 00:31:42.179368 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:31:42.187614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:31:42.206045 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:31:42.240606 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:31:42.243652 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:31:42.253016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:31:42.268117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:31:42.271736 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:31:42.298477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:31:42.464896 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 00:31:42.464981 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 00:31:42.465010 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 00:31:42.465960 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 00:31:42.466881 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 00:31:42.467879 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 00:31:42.469313 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 00:31:42.469357 kernel: ata3.00: applying bridge limits
Sep 9 00:31:42.469369 kernel: ata3.00: configured for UDMA/100
Sep 9 00:31:42.471865 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 00:31:42.519885 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 00:31:42.520280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:31:42.533892 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:31:42.544341 disk-uuid[555]: Primary Header is updated.
Sep 9 00:31:42.544341 disk-uuid[555]: Secondary Entries is updated.
Sep 9 00:31:42.544341 disk-uuid[555]: Secondary Header is updated.
Sep 9 00:31:42.549902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:31:42.554886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:31:43.600874 disk-uuid[577]: The operation has completed successfully.
Sep 9 00:31:43.602922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:31:43.660365 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:31:43.660510 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:31:43.675035 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:31:43.679323 sh[593]: Success
Sep 9 00:31:43.701575 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 9 00:31:43.748304 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:31:43.764980 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:31:43.769231 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:31:43.807892 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a
Sep 9 00:31:43.807966 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:31:43.809861 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:31:43.809889 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:31:43.810797 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:31:43.816758 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:31:43.817918 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:31:43.827116 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:31:43.829610 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:31:43.844642 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:31:43.844735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:31:43.844750 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:31:43.849916 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:31:43.864148 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:31:43.880730 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:31:43.907896 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:31:43.919098 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:31:44.091469 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:31:44.128351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:31:44.132263 ignition[719]: Ignition 2.19.0
Sep 9 00:31:44.132281 ignition[719]: Stage: fetch-offline
Sep 9 00:31:44.132508 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:31:44.132525 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:31:44.135136 ignition[719]: parsed url from cmdline: ""
Sep 9 00:31:44.136373 ignition[719]: no config URL provided
Sep 9 00:31:44.136386 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:31:44.136399 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:31:44.136437 ignition[719]: op(1): [started] loading QEMU firmware config module
Sep 9 00:31:44.136444 ignition[719]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:31:44.147737 ignition[719]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:31:44.156222 systemd-networkd[779]: lo: Link UP
Sep 9 00:31:44.156239 systemd-networkd[779]: lo: Gained carrier
Sep 9 00:31:44.158495 systemd-networkd[779]: Enumeration completed
Sep 9 00:31:44.159005 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:31:44.159165 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:31:44.159170 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:31:44.160354 systemd-networkd[779]: eth0: Link UP
Sep 9 00:31:44.160360 systemd-networkd[779]: eth0: Gained carrier
Sep 9 00:31:44.160369 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:31:44.162890 systemd[1]: Reached target network.target - Network.
Sep 9 00:31:44.217973 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:31:44.251173 ignition[719]: parsing config with SHA512: 8d947ff7039efd44f25de5b40ab8157c5e72a4a35877b6c7575bb647f7fc3d0d12449a5d5f5dc14d78fa1bd1c533c08883ba6abe7cee7ff7547df6a000d91f4e
Sep 9 00:31:44.261084 unknown[719]: fetched base config from "system"
Sep 9 00:31:44.295316 unknown[719]: fetched user config from "qemu"
Sep 9 00:31:44.296143 ignition[719]: fetch-offline: fetch-offline passed
Sep 9 00:31:44.296295 ignition[719]: Ignition finished successfully
Sep 9 00:31:44.298824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:31:44.300280 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:31:44.312110 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:31:44.354624 ignition[787]: Ignition 2.19.0
Sep 9 00:31:44.354649 ignition[787]: Stage: kargs
Sep 9 00:31:44.354945 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:31:44.354961 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:31:44.356225 ignition[787]: kargs: kargs passed
Sep 9 00:31:44.356284 ignition[787]: Ignition finished successfully
Sep 9 00:31:44.360296 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:31:44.371073 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:31:44.430037 ignition[795]: Ignition 2.19.0
Sep 9 00:31:44.430049 ignition[795]: Stage: disks
Sep 9 00:31:44.437523 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:31:44.430274 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:31:44.461381 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:31:44.430289 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:31:44.462780 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:31:44.431201 ignition[795]: disks: disks passed
Sep 9 00:31:44.464914 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:31:44.431245 ignition[795]: Ignition finished successfully
Sep 9 00:31:44.465928 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:31:44.466973 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:31:44.476061 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:31:44.491574 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:31:45.218164 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:31:45.266017 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:31:45.421868 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none.
Sep 9 00:31:45.422469 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:31:45.474490 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:31:45.490032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:31:45.493093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:31:45.496144 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:31:45.496218 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:31:45.506214 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Sep 9 00:31:45.506238 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:31:45.506249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:31:45.506259 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:31:45.496259 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:31:45.509156 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:31:45.511312 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:31:45.512485 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:31:45.516727 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:31:45.562442 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:31:45.567403 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:31:45.573963 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:31:45.578160 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:31:45.692432 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:31:45.710058 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:31:45.713477 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:31:45.720175 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:31:45.721477 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:31:45.765524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:31:45.987715 ignition[930]: INFO : Ignition 2.19.0
Sep 9 00:31:45.987715 ignition[930]: INFO : Stage: mount
Sep 9 00:31:45.989702 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:31:45.989702 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:31:45.992799 ignition[930]: INFO : mount: mount passed
Sep 9 00:31:45.993760 ignition[930]: INFO : Ignition finished successfully
Sep 9 00:31:45.996299 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:31:46.003215 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:31:46.025015 systemd-networkd[779]: eth0: Gained IPv6LL
Sep 9 00:31:46.436173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:31:46.507698 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Sep 9 00:31:46.507768 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:31:46.507784 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:31:46.509176 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:31:46.513267 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:31:46.514722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:31:46.548999 ignition[955]: INFO : Ignition 2.19.0 Sep 9 00:31:46.548999 ignition[955]: INFO : Stage: files Sep 9 00:31:46.588418 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:46.588418 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:46.588418 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:31:46.588418 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:31:46.588418 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:31:46.597005 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:31:46.598528 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:31:46.600288 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:31:46.598999 unknown[955]: wrote ssh authorized keys file for user: core Sep 9 00:31:46.620150 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:31:46.620150 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:31:46.767215 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:31:47.249422 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:31:47.249422 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:31:47.253703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:31:47.714140 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:31:48.873772 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:31:48.873772 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:31:48.884639 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:31:48.960516 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:31:48.971689 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:31:48.971689 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:31:48.971689 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:31:48.971689 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:31:48.971689 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:31:48.971689 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:31:48.971689 ignition[955]: INFO : files: files passed Sep 9 00:31:48.971689 ignition[955]: INFO : Ignition finished successfully Sep 9 00:31:48.973439 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:31:48.998551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:31:49.001981 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:31:49.004549 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 9 00:31:49.004707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:31:49.056887 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:31:49.060411 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:49.060411 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:49.086400 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:49.063429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:31:49.084661 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:31:49.093152 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:31:49.127090 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:31:49.127309 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:31:49.130482 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:31:49.132990 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:31:49.135422 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:31:49.152251 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:31:49.170623 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:31:49.173697 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:31:49.188377 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:31:49.230022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:31:49.232279 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:31:49.234522 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:31:49.234717 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:31:49.237280 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:31:49.239007 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:31:49.241317 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:31:49.243740 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:31:49.246012 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:31:49.248404 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:31:49.317187 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:31:49.319800 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:31:49.322063 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:31:49.324542 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:31:49.326609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:31:49.326814 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:31:49.329436 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:31:49.331036 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 9 00:31:49.333408 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:31:49.333609 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:31:49.336119 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:31:49.336312 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:31:49.339021 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:31:49.339191 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:31:49.341373 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:31:49.343373 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:31:49.344290 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:31:49.346529 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:31:49.348524 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:31:49.350603 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:31:49.350748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:31:49.352541 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:31:49.352642 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:31:49.354762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:31:49.354909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:31:49.356790 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:31:49.356944 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:31:49.442270 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:31:49.444904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:31:49.445932 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:31:49.446076 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:31:49.448415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:31:49.448523 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:31:49.455521 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:31:49.465440 ignition[1009]: INFO : Ignition 2.19.0 Sep 9 00:31:49.465440 ignition[1009]: INFO : Stage: umount Sep 9 00:31:49.465440 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:49.465440 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:49.465440 ignition[1009]: INFO : umount: umount passed Sep 9 00:31:49.465440 ignition[1009]: INFO : Ignition finished successfully Sep 9 00:31:49.455641 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:31:49.466602 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:31:49.466747 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:31:49.468921 systemd[1]: Stopped target network.target - Network. Sep 9 00:31:49.470971 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:31:49.471045 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:31:49.516605 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:31:49.516701 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Sep 9 00:31:49.518575 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:31:49.518637 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:31:49.520979 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:31:49.521087 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:31:49.523383 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:31:49.525636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:31:49.529064 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:31:49.529914 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:31:49.530072 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:31:49.530955 systemd-networkd[779]: eth0: DHCPv6 lease lost Sep 9 00:31:49.533264 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:31:49.533455 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:31:49.536723 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:31:49.536946 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:31:49.540530 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:31:49.540616 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:31:49.542214 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:31:49.542291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:31:49.555183 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:31:49.586113 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:31:49.586248 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:31:49.588615 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:31:49.588681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:31:49.590608 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:31:49.590666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:31:49.593010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:31:49.593076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:31:49.595431 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:31:49.608486 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:31:49.608722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:31:49.612476 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:31:49.612609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:31:49.614985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:31:49.615089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:31:49.616303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:31:49.616389 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:31:49.618277 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:31:49.618339 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:31:49.620869 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Sep 9 00:31:49.620940 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:31:49.622537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:31:49.622596 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:31:49.639036 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:31:49.641335 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:31:49.641411 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:31:49.643719 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 00:31:49.643778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:31:49.646363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:31:49.646455 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:31:49.648760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:31:49.648827 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:49.651789 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:31:49.651948 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:31:49.654036 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:31:49.656639 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:31:49.670233 systemd[1]: Switching root. Sep 9 00:31:49.704207 systemd-journald[192]: Journal stopped Sep 9 00:31:52.658519 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 9 00:31:52.658588 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:31:52.658606 kernel: SELinux: policy capability open_perms=1 Sep 9 00:31:52.658618 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:31:52.658635 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:31:52.658646 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:31:52.658658 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:31:52.658670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:31:52.658684 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:31:52.658701 kernel: audit: type=1403 audit(1757377911.529:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:31:52.658714 systemd[1]: Successfully loaded SELinux policy in 45.575ms. Sep 9 00:31:52.658729 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.714ms. Sep 9 00:31:52.658742 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:31:52.658755 systemd[1]: Detected virtualization kvm. Sep 9 00:31:52.658768 systemd[1]: Detected architecture x86-64. Sep 9 00:31:52.658784 systemd[1]: Detected first boot. Sep 9 00:31:52.658796 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:31:52.658811 zram_generator::config[1053]: No configuration found. Sep 9 00:31:52.658824 systemd[1]: Populated /etc with preset unit settings. 
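
[Note: The systemd 255 banner above lists compile-time features as +FLAG/-FLAG tokens plus key=value settings such as default-hierarchy=unified. A small parser for exactly that format (hypothetical helper, abbreviated input):

    def parse_features(feature_str: str) -> dict:
        """Parse systemd's '+PAM +AUDIT -APPARMOR ...' feature flag string."""
        flags = {}
        for tok in feature_str.split():
            if tok[0] in "+-":
                flags[tok[1:]] = tok.startswith("+")
            # tokens like 'default-hierarchy=unified' are settings, not flags
        return flags

    features = parse_features(
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
        "+OPENSSL -ACL +BLKID +CURL default-hierarchy=unified"
    )
    assert features["SELINUX"] and not features["APPARMOR"]
]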
Sep 9 00:31:52.658836 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:31:52.659038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:31:52.659052 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:31:52.659065 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:31:52.659082 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:31:52.659094 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:31:52.659109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:31:52.659121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:31:52.659133 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:31:52.659147 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:31:52.659159 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:31:52.659172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:31:52.659185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:31:52.659197 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:31:52.659209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:31:52.659224 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:31:52.659237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:31:52.659249 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:31:52.659261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:31:52.659273 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:31:52.659285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:31:52.659297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:31:52.659312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:31:52.659324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:31:52.659336 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:31:52.659349 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:31:52.659362 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:31:52.659374 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:31:52.659386 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:31:52.659398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:31:52.659412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:31:52.659424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:31:52.659439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:31:52.659451 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:31:52.659463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Sep 9 00:31:52.659475 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:31:52.659487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:52.659499 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:31:52.659511 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:31:52.659523 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:31:52.659539 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:31:52.659551 systemd[1]: Reached target machines.target - Containers. Sep 9 00:31:52.659563 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:31:52.659575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:52.659587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:31:52.659600 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:31:52.659612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:52.659624 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:31:52.659637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:52.659651 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:31:52.659663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:31:52.659676 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:31:52.659688 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:31:52.659701 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:31:52.659712 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:31:52.659725 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:31:52.659736 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:31:52.659751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:31:52.659763 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:31:52.659776 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:31:52.659788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:31:52.659801 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:31:52.659813 systemd[1]: Stopped verity-setup.service. Sep 9 00:31:52.659826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:52.659854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:31:52.659867 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:31:52.659881 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:31:52.659893 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Sep 9 00:31:52.659906 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:31:52.659918 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:31:52.659930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:31:52.659951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:31:52.659965 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:31:52.659977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:52.659990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:52.660002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:52.660015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:31:52.660027 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:31:52.660039 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:31:52.660052 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:31:52.660067 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:31:52.660080 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:31:52.660094 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:31:52.660107 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:31:52.660119 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:31:52.660134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:31:52.660146 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:31:52.660159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:31:52.660171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:52.660183 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:31:52.660214 systemd-journald[1116]: Collecting audit messages is disabled. Sep 9 00:31:52.660237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:31:52.660252 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:31:52.660265 kernel: fuse: init (API version 7.39) Sep 9 00:31:52.660276 kernel: ACPI: bus type drm_connector registered Sep 9 00:31:52.660288 kernel: loop: module loaded Sep 9 00:31:52.660300 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:31:52.660312 systemd-journald[1116]: Journal started Sep 9 00:31:52.660334 systemd-journald[1116]: Runtime Journal (/run/log/journal/fb3a2ada3e0a49b090a6bb10d22f9791) is 6.0M, max 48.4M, 42.3M free. Sep 9 00:31:52.220741 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:31:52.240568 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:31:52.241095 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 9 00:31:52.241480 systemd[1]: systemd-journald.service: Consumed 1.139s CPU time. Sep 9 00:31:52.662375 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:31:52.662402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:31:52.664868 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:31:52.666557 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:31:52.666746 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:31:52.668235 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:31:52.668426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:31:52.669927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:31:52.671296 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:31:52.699074 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:31:52.701615 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:31:52.702901 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:31:52.703990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:31:52.706527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:31:52.708104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:31:52.712203 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:31:52.735244 udevadm[1159]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:31:52.776811 systemd-journald[1116]: Time spent on flushing to /var/log/journal/fb3a2ada3e0a49b090a6bb10d22f9791 is 18.953ms for 958 entries. Sep 9 00:31:52.776811 systemd-journald[1116]: System Journal (/var/log/journal/fb3a2ada3e0a49b090a6bb10d22f9791) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:31:53.486788 systemd-journald[1116]: Received client request to flush runtime journal. Sep 9 00:31:53.486875 kernel: loop0: detected capacity change from 0 to 140768 Sep 9 00:31:53.486911 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:31:53.486950 kernel: loop1: detected capacity change from 0 to 142488 Sep 9 00:31:53.486973 kernel: loop2: detected capacity change from 0 to 229808 Sep 9 00:31:53.486993 kernel: loop3: detected capacity change from 0 to 140768 Sep 9 00:31:53.487012 kernel: loop4: detected capacity change from 0 to 142488 Sep 9 00:31:53.487031 kernel: loop5: detected capacity change from 0 to 229808 Sep 9 00:31:52.785464 systemd-tmpfiles[1122]: ACLs are not supported, ignoring. Sep 9 00:31:52.785478 systemd-tmpfiles[1122]: ACLs are not supported, ignoring. Sep 9 00:31:52.791842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:31:52.807677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:31:53.055493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:31:53.058862 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:31:53.069086 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
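
[Note: journald reports 18.953 ms spent flushing 958 entries to the persistent journal at /var/log/journal, which works out to roughly 20 microseconds per entry; quick arithmetic:

    flush_ms, entries = 18.953, 958
    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} us/entry")   # ~19.8 us per journal entry
]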
Sep 9 00:31:53.370413 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:31:53.371180 (sd-merge)[1184]: Merged extensions into '/usr'. Sep 9 00:31:53.424635 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:31:53.424665 systemd[1]: Reloading... Sep 9 00:31:53.501051 zram_generator::config[1213]: No configuration found. Sep 9 00:31:53.675311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:31:53.735168 systemd[1]: Reloading finished in 309 ms. Sep 9 00:31:53.792339 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:31:53.824607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:31:53.826467 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:31:53.841077 systemd[1]: Starting ensure-sysext.service... Sep 9 00:31:53.843757 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:31:53.920767 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:31:53.920790 systemd[1]: Reloading... Sep 9 00:31:53.946262 ldconfig[1124]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:31:53.993890 zram_generator::config[1282]: No configuration found. Sep 9 00:31:54.183234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:31:54.233827 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:31:54.234105 systemd[1]: Reloading finished in 312 ms. Sep 9 00:31:54.254013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:31:54.267430 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:31:54.286374 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:31:54.297052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:31:54.299500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:31:54.307244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:54.307493 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:54.309348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:54.315955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:54.317456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:31:54.318621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:54.318784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
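
[Note: The (sd-merge) lines show systemd-sysext overlaying the three extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr, after which systemd reloads to pick up the newly visible unit files. Conceptually the merge is a read-only overlayfs mount with the base /usr as the lowest layer; the sketch below only prints the equivalent mount command, and the /run paths in it are hypothetical (sysext manages the real hierarchies itself):

    base = "/usr"
    # Hypothetical unpacked extension trees; sysext derives the real locations
    extensions = [
        "/run/extensions/kubernetes/usr",
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/containerd-flatcar/usr",
    ]
    # overlayfs stacks lowerdirs left-to-right, topmost layer first;
    # with no upperdir the resulting mount is read-only
    lowerdir = ":".join(extensions + [base])
    cmd = ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", base]
    print(" ".join(cmd))  # would be run as root to perform the merge
]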
Sep 9 00:31:54.321098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:54.321262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:54.321422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:54.321517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:54.323224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:54.323461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:54.325285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:54.325492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:31:54.340292 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Sep 9 00:31:54.340314 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Sep 9 00:31:54.446598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:31:54.506701 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:31:54.506918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:31:54.511400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:54.511621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:54.522126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:54.584857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:31:54.586858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:54.588198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:54.588296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:54.588859 systemd[1]: Finished ensure-sysext.service. Sep 9 00:31:54.590117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:54.590302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:54.593978 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:31:54.595024 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:31:54.595204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:31:54.596798 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:54.597029 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:31:54.600210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:31:54.619870 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 9 00:31:54.620210 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:31:54.621160 systemd-tmpfiles[1322]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:31:54.621470 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Sep 9 00:31:54.621561 systemd-tmpfiles[1322]: ACLs are not supported, ignoring. Sep 9 00:31:54.625036 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:31:54.625048 systemd-tmpfiles[1322]: Skipping /boot Sep 9 00:31:54.635978 systemd-tmpfiles[1322]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:31:54.635991 systemd-tmpfiles[1322]: Skipping /boot Sep 9 00:31:54.664297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:31:54.676038 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:31:54.678700 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:31:54.681019 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:31:54.685038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:31:54.693223 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:31:54.695835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:31:54.700327 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:31:54.802933 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:31:54.807498 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:31:54.819912 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:31:54.834619 augenrules[1370]: No rules Sep 9 00:31:54.836525 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:31:54.923232 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:31:54.925060 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:31:54.930478 systemd-resolved[1347]: Positive Trust Anchors: Sep 9 00:31:54.930513 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:31:54.930562 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:31:54.931288 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:31:54.932338 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:31:54.935447 systemd-resolved[1347]: Defaulting to hostname 'linux'. 
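
[Note: The positive trust anchor logged by systemd-resolved is the ICANN root KSK-2017 DS record. Its fields decode as key tag 20326, algorithm 8 (RSA/SHA-256), and digest type 2 (SHA-256 over the DNSKEY), which the snippet below pulls apart directly from the logged record:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _type, keytag, alg, digest_type, digest = ds.split()
    assert owner == "." and int(keytag) == 20326
    ALGS = {8: "RSASHA256"}      # DNSSEC algorithm numbers
    DIGESTS = {2: "SHA-256"}     # DS digest type numbers
    print(ALGS[int(alg)], DIGESTS[int(digest_type)],
          len(digest) // 2, "byte digest")   # RSASHA256 SHA-256 32 byte digest
]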
Sep 9 00:31:54.937141 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:31:54.938350 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:31:54.993780 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:31:55.008046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:31:55.010774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:31:55.029022 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:31:55.061436 systemd-udevd[1379]: Using default interface naming scheme 'v255'. Sep 9 00:31:55.078823 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:31:55.091476 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:31:55.119901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1390) Sep 9 00:31:55.132014 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:31:55.172054 systemd-networkd[1388]: lo: Link UP Sep 9 00:31:55.172074 systemd-networkd[1388]: lo: Gained carrier Sep 9 00:31:55.174931 systemd-networkd[1388]: Enumeration completed Sep 9 00:31:55.175088 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:31:55.175600 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:55.175612 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:31:55.178734 systemd-networkd[1388]: eth0: Link UP Sep 9 00:31:55.178743 systemd-networkd[1388]: eth0: Gained carrier Sep 9 00:31:55.178777 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:55.190400 systemd[1]: Reached target network.target - Network. Sep 9 00:31:55.201132 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:31:55.203904 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:31:55.205661 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Sep 9 00:31:55.205749 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:56.495869 systemd-timesyncd[1351]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:31:56.495958 systemd-timesyncd[1351]: Initial clock synchronization to Tue 2025-09-09 00:31:56.495413 UTC. Sep 9 00:31:56.496537 systemd-resolved[1347]: Clock change detected. Flushing caches. Sep 9 00:31:56.514544 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 9 00:31:56.503968 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:31:56.520510 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:31:56.524823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
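
[Note: Observe the wall-clock jump above: entries go from 00:31:55.205749 straight to 00:31:56.495869 once systemd-timesyncd gets its first response from 10.0.0.1:123 and steps the clock, which is also why systemd-resolved logs "Clock change detected. Flushing caches." The step size is easy to confirm from the timestamps:

    from datetime import datetime

    before = datetime.fromisoformat("2025-09-09 00:31:55.205749")
    after = datetime.fromisoformat("2025-09-09 00:31:56.495869")
    print((after - before).total_seconds())   # ~1.29 s forward step
]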
Sep 9 00:31:56.675223 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 00:31:56.675319 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:31:56.545220 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:31:56.680570 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 00:31:56.680907 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:31:56.702853 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:31:56.713861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:56.835562 kernel: kvm_amd: TSC scaling supported Sep 9 00:31:56.835926 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:31:56.835993 kernel: kvm_amd: Nested Paging enabled Sep 9 00:31:56.836096 kernel: kvm_amd: LBR virtualization supported Sep 9 00:31:56.836176 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:31:56.836240 kernel: kvm_amd: Virtual GIF supported Sep 9 00:31:56.856147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:56.870518 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:31:56.910316 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:31:56.924815 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:31:56.935134 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:31:56.979739 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:31:56.981799 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:31:56.983144 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:31:56.984575 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:31:56.986146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:31:56.988537 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:31:56.990410 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:31:56.992099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:31:56.993878 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:31:56.993983 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:31:56.995266 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:31:56.997646 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:31:57.001541 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:31:57.017335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:31:57.020361 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:31:57.022247 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:31:57.023730 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:31:57.024895 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:31:57.026041 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
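
[Note: The run of "Reached target" / "Started" / "Listening on" lines above reflects systemd's ordering: sysinit.target first, then the path, timer, and socket units, then basic.target. As a toy illustration only (a drastically simplified dependency graph, not this host's real one), the same ordering falls out of a topological sort:

    from graphlib import TopologicalSorter

    # node -> set of units that must come first (simplified)
    deps = {
        "paths.target": {"sysinit.target"},
        "timers.target": {"sysinit.target"},
        "sockets.target": {"sysinit.target"},
        "basic.target": {"paths.target", "timers.target", "sockets.target"},
    }
    order = list(TopologicalSorter(deps).static_order())
    print(order)  # sysinit.target comes first, basic.target last
]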
Sep 9 00:31:57.026065 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:31:57.027376 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:31:57.029841 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:31:57.034622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:31:57.037660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:31:57.040399 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:31:57.042931 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:31:57.045923 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:31:57.046405 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:31:57.052100 jq[1432]: false Sep 9 00:31:57.053622 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:31:57.057145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:31:57.064190 extend-filesystems[1433]: Found loop3 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found loop4 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found loop5 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found sr0 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda1 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda2 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda3 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found usr Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda4 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda6 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda7 Sep 9 00:31:57.065611 extend-filesystems[1433]: Found vda9 Sep 9 00:31:57.065611 extend-filesystems[1433]: Checking size of /dev/vda9 Sep 9 00:31:57.085181 extend-filesystems[1433]: Resized partition /dev/vda9 Sep 9 00:31:57.065666 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:31:57.078254 dbus-daemon[1431]: [system] SELinux support is enabled Sep 9 00:31:57.088264 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:31:57.070189 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:31:57.070896 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:31:57.084718 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:31:57.095559 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1386) Sep 9 00:31:57.095637 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:31:57.099538 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:31:57.102316 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:31:57.107989 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:31:57.111242 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 9 00:31:57.111919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:31:57.112313 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:31:57.114545 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:31:57.157546 jq[1454]: true Sep 9 00:31:57.157255 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:31:57.157570 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:31:57.168506 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:31:57.193009 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:31:57.193707 update_engine[1447]: I20250909 00:31:57.171414 1447 main.cc:92] Flatcar Update Engine starting Sep 9 00:31:57.193707 update_engine[1447]: I20250909 00:31:57.179944 1447 update_check_scheduler.cc:74] Next update check in 10m42s Sep 9 00:31:57.194085 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:31:57.194085 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:31:57.194085 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:31:57.199008 jq[1458]: true Sep 9 00:31:57.201392 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Sep 9 00:31:57.203358 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:31:57.203691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:31:57.215397 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Sep 9 00:31:57.215430 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:31:57.216560 systemd-logind[1440]: New seat seat0. Sep 9 00:31:57.217353 tar[1457]: linux-amd64/LICENSE Sep 9 00:31:57.219110 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:31:57.219739 tar[1457]: linux-amd64/helm Sep 9 00:31:57.222595 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:31:57.231039 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:31:57.233513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:31:57.233652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:31:57.235200 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:31:57.235313 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:31:57.245865 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:31:57.305676 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:31:57.309593 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:31:57.311694 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:31:57.315777 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
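
[Note: extend-filesystems grew the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB, i.e. from about 2.1 GiB to about 7.1 GiB; checking the arithmetic:

    BLOCK = 4096  # ext4 block size reported in the log
    for label, blocks in [("before", 553_472), ("after", 1_864_699)]:
        print(label, f"{blocks * BLOCK / 2**30:.2f} GiB")
    # before 2.11 GiB, after 7.11 GiB
]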
Sep 9 00:31:57.341494 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:31:57.355836 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:31:57.402103 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:31:57.405535 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:56626.service - OpenSSH per-connection server daemon (10.0.0.1:56626). Sep 9 00:31:57.412268 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:31:57.412672 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:31:57.422310 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:31:57.486815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:31:57.552230 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:31:57.557323 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:31:57.559855 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:31:57.625248 sshd[1503]: Accepted publickey for core from 10.0.0.1 port 56626 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:31:57.627903 sshd[1503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:57.639696 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:31:57.661132 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:31:57.665919 systemd-logind[1440]: New session 1 of user core. Sep 9 00:31:57.687378 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:31:57.704917 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:31:57.710348 (systemd)[1519]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:31:57.722253 containerd[1459]: time="2025-09-09T00:31:57.722103258Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:31:57.813368 systemd-networkd[1388]: eth0: Gained IPv6LL Sep 9 00:31:57.818233 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:31:57.822227 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:31:57.838041 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:31:57.839899 containerd[1459]: time="2025-09-09T00:31:57.839772267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.842968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:31:57.844684 containerd[1459]: time="2025-09-09T00:31:57.844628232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:57.844684 containerd[1459]: time="2025-09-09T00:31:57.844676683Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:31:57.844747 containerd[1459]: time="2025-09-09T00:31:57.844703273Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 9 00:31:57.845020 containerd[1459]: time="2025-09-09T00:31:57.844982185Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:31:57.845082 containerd[1459]: time="2025-09-09T00:31:57.845051676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.845251 containerd[1459]: time="2025-09-09T00:31:57.845206626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:57.845289 containerd[1459]: time="2025-09-09T00:31:57.845247493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846232 containerd[1459]: time="2025-09-09T00:31:57.845592590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846232 containerd[1459]: time="2025-09-09T00:31:57.845627806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846232 containerd[1459]: time="2025-09-09T00:31:57.845647964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846232 containerd[1459]: time="2025-09-09T00:31:57.845660066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846232 containerd[1459]: time="2025-09-09T00:31:57.845818193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846340 containerd[1459]: time="2025-09-09T00:31:57.846245254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846403 containerd[1459]: time="2025-09-09T00:31:57.846376279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:57.846403 containerd[1459]: time="2025-09-09T00:31:57.846397710Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:31:57.846657 containerd[1459]: time="2025-09-09T00:31:57.846632600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:31:57.846755 containerd[1459]: time="2025-09-09T00:31:57.846731495Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:31:57.851365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:31:57.855980 containerd[1459]: time="2025-09-09T00:31:57.855939530Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:31:57.856049 containerd[1459]: time="2025-09-09T00:31:57.856026804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 9 00:31:57.856075 containerd[1459]: time="2025-09-09T00:31:57.856055147Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:31:57.856190 containerd[1459]: time="2025-09-09T00:31:57.856157850Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:31:57.856190 containerd[1459]: time="2025-09-09T00:31:57.856185381Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:31:57.856588 containerd[1459]: time="2025-09-09T00:31:57.856406757Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:31:57.857148 containerd[1459]: time="2025-09-09T00:31:57.857086100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:31:57.857559 containerd[1459]: time="2025-09-09T00:31:57.857539530Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:31:57.857624 containerd[1459]: time="2025-09-09T00:31:57.857610323Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:31:57.857698 containerd[1459]: time="2025-09-09T00:31:57.857684572Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:31:57.857756 containerd[1459]: time="2025-09-09T00:31:57.857742190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.857823 containerd[1459]: time="2025-09-09T00:31:57.857809567Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.857880 containerd[1459]: time="2025-09-09T00:31:57.857867395Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.857936 containerd[1459]: time="2025-09-09T00:31:57.857923961Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.857997 containerd[1459]: time="2025-09-09T00:31:57.857980908Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.858048 containerd[1459]: time="2025-09-09T00:31:57.858036703Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.858131 containerd[1459]: time="2025-09-09T00:31:57.858107966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.858229 containerd[1459]: time="2025-09-09T00:31:57.858209767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:31:57.858361 containerd[1459]: time="2025-09-09T00:31:57.858341715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858449 containerd[1459]: time="2025-09-09T00:31:57.858435300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858579 containerd[1459]: time="2025-09-09T00:31:57.858545627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 9 00:31:57.858646 containerd[1459]: time="2025-09-09T00:31:57.858632840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858727 containerd[1459]: time="2025-09-09T00:31:57.858712870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858788 containerd[1459]: time="2025-09-09T00:31:57.858776099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858838 containerd[1459]: time="2025-09-09T00:31:57.858826804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858907 containerd[1459]: time="2025-09-09T00:31:57.858893780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.858960 containerd[1459]: time="2025-09-09T00:31:57.858947490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.859031 containerd[1459]: time="2025-09-09T00:31:57.859011881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.859084 containerd[1459]: time="2025-09-09T00:31:57.859072355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.859171 containerd[1459]: time="2025-09-09T00:31:57.859144870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.859252 containerd[1459]: time="2025-09-09T00:31:57.859231573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.859339 containerd[1459]: time="2025-09-09T00:31:57.859316322Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:31:57.859434 containerd[1459]: time="2025-09-09T00:31:57.859412773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.860886626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.860923806Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861085068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861139821Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861160149Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861199653Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861219691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861258524Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861297166Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:31:57.862223 containerd[1459]: time="2025-09-09T00:31:57.861314479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:31:57.862514 containerd[1459]: time="2025-09-09T00:31:57.861987631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:31:57.862514 containerd[1459]: time="2025-09-09T00:31:57.862077910Z" level=info msg="Connect containerd service" Sep 9 00:31:57.862514 containerd[1459]: time="2025-09-09T00:31:57.862144485Z" level=info msg="using legacy CRI server" Sep 9 00:31:57.862514 containerd[1459]: time="2025-09-09T00:31:57.862155856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:31:57.866911 containerd[1459]: 
time="2025-09-09T00:31:57.866854847Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:31:57.878186 containerd[1459]: time="2025-09-09T00:31:57.878109659Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.878622281Z" level=info msg="Start subscribing containerd event" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.879055603Z" level=info msg="Start recovering state" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.879239418Z" level=info msg="Start event monitor" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.879283661Z" level=info msg="Start snapshots syncer" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.879330238Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:31:57.879784 containerd[1459]: time="2025-09-09T00:31:57.879358311Z" level=info msg="Start streaming server" Sep 9 00:31:57.882789 containerd[1459]: time="2025-09-09T00:31:57.882603345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:31:57.882789 containerd[1459]: time="2025-09-09T00:31:57.882733028Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:31:57.883001 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:31:57.884642 containerd[1459]: time="2025-09-09T00:31:57.884599788Z" level=info msg="containerd successfully booted in 0.165326s" Sep 9 00:31:57.911415 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:31:57.911777 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:31:57.914803 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:31:57.917182 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:31:57.956154 systemd[1519]: Queued start job for default target default.target. Sep 9 00:31:57.966407 systemd[1519]: Created slice app.slice - User Application Slice. Sep 9 00:31:57.966860 systemd[1519]: Reached target paths.target - Paths. Sep 9 00:31:57.966880 systemd[1519]: Reached target timers.target - Timers. Sep 9 00:31:57.969038 systemd[1519]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:31:57.992769 systemd[1519]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:31:57.992908 systemd[1519]: Reached target sockets.target - Sockets. Sep 9 00:31:57.992924 systemd[1519]: Reached target basic.target - Basic System. Sep 9 00:31:57.992961 systemd[1519]: Reached target default.target - Main User Target. Sep 9 00:31:57.992998 systemd[1519]: Startup finished in 269ms. Sep 9 00:31:57.993808 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:31:58.059043 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:31:58.181287 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:56628.service - OpenSSH per-connection server daemon (10.0.0.1:56628). Sep 9 00:31:58.206931 tar[1457]: linux-amd64/README.md Sep 9 00:31:58.259500 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 9 00:31:58.282051 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:31:58.284149 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:58.289337 systemd-logind[1440]: New session 2 of user core. Sep 9 00:31:58.299830 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:31:58.361208 sshd[1552]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:58.426902 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:56628.service: Deactivated successfully. Sep 9 00:31:58.428905 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:31:58.431144 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:31:58.436841 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:56634.service - OpenSSH per-connection server daemon (10.0.0.1:56634). Sep 9 00:31:58.440212 systemd-logind[1440]: Removed session 2. Sep 9 00:31:58.477781 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 56634 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:31:58.479765 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:58.484557 systemd-logind[1440]: New session 3 of user core. Sep 9 00:31:58.494841 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:31:58.791054 sshd[1562]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:58.796945 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:56634.service: Deactivated successfully. Sep 9 00:31:58.800215 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:31:58.801166 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:31:58.802259 systemd-logind[1440]: Removed session 3. Sep 9 00:31:59.578200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:31:59.580266 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:31:59.581861 systemd[1]: Startup finished in 1.129s (kernel) + 10.749s (initrd) + 6.806s (userspace) = 18.685s. Sep 9 00:31:59.595155 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:00.337739 kubelet[1573]: E0909 00:32:00.337662 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:00.342633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:00.342891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:32:00.343282 systemd[1]: kubelet.service: Consumed 2.264s CPU time. Sep 9 00:32:08.781491 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:34662.service - OpenSSH per-connection server daemon (10.0.0.1:34662). Sep 9 00:32:08.817980 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 34662 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:08.819605 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:08.824032 systemd-logind[1440]: New session 4 of user core. Sep 9 00:32:08.833685 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 9 00:32:08.890019 sshd[1586]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:08.905016 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:34662.service: Deactivated successfully. Sep 9 00:32:08.906759 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:32:08.908245 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:32:08.914726 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:34676.service - OpenSSH per-connection server daemon (10.0.0.1:34676). Sep 9 00:32:08.915723 systemd-logind[1440]: Removed session 4. Sep 9 00:32:08.947626 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 34676 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:08.949735 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:08.954420 systemd-logind[1440]: New session 5 of user core. Sep 9 00:32:08.967732 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:32:09.017076 sshd[1593]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:09.030928 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:34676.service: Deactivated successfully. Sep 9 00:32:09.032859 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:32:09.034781 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:32:09.054833 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:34690.service - OpenSSH per-connection server daemon (10.0.0.1:34690). Sep 9 00:32:09.055956 systemd-logind[1440]: Removed session 5. Sep 9 00:32:09.087441 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34690 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:09.089350 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:09.094320 systemd-logind[1440]: New session 6 of user core. Sep 9 00:32:09.103756 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:32:09.159116 sshd[1600]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:09.174769 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:34690.service: Deactivated successfully. Sep 9 00:32:09.176644 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:32:09.178361 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:32:09.190762 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:34696.service - OpenSSH per-connection server daemon (10.0.0.1:34696). Sep 9 00:32:09.191854 systemd-logind[1440]: Removed session 6. Sep 9 00:32:09.224432 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 34696 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:09.226487 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:09.230993 systemd-logind[1440]: New session 7 of user core. Sep 9 00:32:09.244736 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:32:09.306803 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:32:09.307238 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:32:09.323585 sudo[1610]: pam_unix(sudo:session): session closed for user root Sep 9 00:32:09.325718 sshd[1607]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:09.337306 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:34696.service: Deactivated successfully. Sep 9 00:32:09.339097 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 9 00:32:09.341251 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:32:09.342988 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:34706.service - OpenSSH per-connection server daemon (10.0.0.1:34706). Sep 9 00:32:09.343980 systemd-logind[1440]: Removed session 7. Sep 9 00:32:09.384654 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34706 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:09.386521 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:09.391444 systemd-logind[1440]: New session 8 of user core. Sep 9 00:32:09.406602 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:32:09.461876 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:32:09.462230 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:32:09.466736 sudo[1619]: pam_unix(sudo:session): session closed for user root Sep 9 00:32:09.473796 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:32:09.474152 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:32:09.493722 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:32:09.495871 auditctl[1622]: No rules Sep 9 00:32:09.497297 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:32:09.497591 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:32:09.499731 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:32:09.534741 augenrules[1640]: No rules Sep 9 00:32:09.536668 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:32:09.538322 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 9 00:32:09.540302 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:09.547391 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:34706.service: Deactivated successfully. Sep 9 00:32:09.549309 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:32:09.550957 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:32:09.552440 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:34708.service - OpenSSH per-connection server daemon (10.0.0.1:34708). Sep 9 00:32:09.553301 systemd-logind[1440]: Removed session 8. Sep 9 00:32:09.589292 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 34708 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:32:09.590707 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:09.594487 systemd-logind[1440]: New session 9 of user core. Sep 9 00:32:09.604590 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:32:09.658045 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:32:09.658409 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:32:10.159709 systemd[1]: Starting docker.service - Docker Application Container Engine... 
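The auditctl/augenrules lines above ("No rules" both times) are the audit-rules.service stop/start cycle after the sudo rm of the rule fragments: stopping flushes the loaded ruleset, and starting recompiles whatever remains under /etc/audit/rules.d/. A sketch of the same check done by hand (running it manually is an assumption; the log shows systemctl driving it):

    # list the loaded kernel audit rules; prints "No rules" when empty
    sudo auditctl -l
    # recompile /etc/audit/rules.d/*.rules and load the result
    sudo augenrules --load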
Sep 9 00:32:10.159876 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:32:10.593576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:32:10.729041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:10.830117 dockerd[1669]: time="2025-09-09T00:32:10.830016672Z" level=info msg="Starting up" Sep 9 00:32:11.040578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:11.046755 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:11.471533 kubelet[1699]: E0909 00:32:11.471290 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:11.481988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:11.482252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:32:11.504258 dockerd[1669]: time="2025-09-09T00:32:11.504162505Z" level=info msg="Loading containers: start." Sep 9 00:32:11.638503 kernel: Initializing XFRM netlink socket Sep 9 00:32:11.734236 systemd-networkd[1388]: docker0: Link UP Sep 9 00:32:11.917135 dockerd[1669]: time="2025-09-09T00:32:11.917066931Z" level=info msg="Loading containers: done." Sep 9 00:32:11.939814 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck848387048-merged.mount: Deactivated successfully. Sep 9 00:32:11.942556 dockerd[1669]: time="2025-09-09T00:32:11.942453350Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:32:11.942677 dockerd[1669]: time="2025-09-09T00:32:11.942653896Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 00:32:11.942865 dockerd[1669]: time="2025-09-09T00:32:11.942826119Z" level=info msg="Daemon has completed initialization" Sep 9 00:32:11.993903 dockerd[1669]: time="2025-09-09T00:32:11.993685090Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:32:11.994004 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:32:13.081140 containerd[1459]: time="2025-09-09T00:32:13.081092198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:32:14.311872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1317670876.mount: Deactivated successfully. 
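The kubelet failures in this log (the run.go:72 "failed to load kubelet config file" errors above and below) are the normal pre-bootstrap crash loop: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it, so every start exits 1 and systemd schedules another attempt. A hedged way to confirm that reading on the node, using only inspection commands:

    # failing unit, exit status, and the most recent journal lines
    systemctl status kubelet --no-pager
    # the file each start attempt fails to open
    ls -l /var/lib/kubelet/config.yaml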
Sep 9 00:32:16.323487 containerd[1459]: time="2025-09-09T00:32:16.323388663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.324070 containerd[1459]: time="2025-09-09T00:32:16.323988057Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 00:32:16.325477 containerd[1459]: time="2025-09-09T00:32:16.325425051Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.329110 containerd[1459]: time="2025-09-09T00:32:16.329062882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.330435 containerd[1459]: time="2025-09-09T00:32:16.330407353Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.249250534s" Sep 9 00:32:16.330508 containerd[1459]: time="2025-09-09T00:32:16.330440745Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 00:32:16.331322 containerd[1459]: time="2025-09-09T00:32:16.331285079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:32:18.832430 containerd[1459]: time="2025-09-09T00:32:18.832332151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.837192 containerd[1459]: time="2025-09-09T00:32:18.837083971Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 00:32:18.838609 containerd[1459]: time="2025-09-09T00:32:18.838573854Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.842372 containerd[1459]: time="2025-09-09T00:32:18.842303618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.843731 containerd[1459]: time="2025-09-09T00:32:18.843694325Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.512364132s" Sep 9 00:32:18.843806 containerd[1459]: time="2025-09-09T00:32:18.843734721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 00:32:18.844833 containerd[1459]: 
time="2025-09-09T00:32:18.844805609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:32:20.582382 containerd[1459]: time="2025-09-09T00:32:20.582316141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:20.583306 containerd[1459]: time="2025-09-09T00:32:20.583226928Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 00:32:20.584475 containerd[1459]: time="2025-09-09T00:32:20.584417250Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:20.587877 containerd[1459]: time="2025-09-09T00:32:20.587840689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:20.588973 containerd[1459]: time="2025-09-09T00:32:20.588920534Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.744077044s" Sep 9 00:32:20.588973 containerd[1459]: time="2025-09-09T00:32:20.588964376Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 00:32:20.589509 containerd[1459]: time="2025-09-09T00:32:20.589483970Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:32:21.566544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:32:21.588203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:21.810494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:21.816155 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:21.902323 kubelet[1907]: E0909 00:32:21.902097 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:21.907355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:21.907728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:32:22.005509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4184394125.mount: Deactivated successfully. 
Sep 9 00:32:23.879354 containerd[1459]: time="2025-09-09T00:32:23.879250462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.907684 containerd[1459]: time="2025-09-09T00:32:23.907582093Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:32:23.974618 containerd[1459]: time="2025-09-09T00:32:23.974530993Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:24.029498 containerd[1459]: time="2025-09-09T00:32:24.029350589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:24.030394 containerd[1459]: time="2025-09-09T00:32:24.030335145Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.440814917s" Sep 9 00:32:24.030394 containerd[1459]: time="2025-09-09T00:32:24.030384648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:32:24.031075 containerd[1459]: time="2025-09-09T00:32:24.031034517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:32:24.993646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649501238.mount: Deactivated successfully. 
Sep 9 00:32:26.607007 containerd[1459]: time="2025-09-09T00:32:26.606939738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:26.639784 containerd[1459]: time="2025-09-09T00:32:26.639708890Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:32:26.654626 containerd[1459]: time="2025-09-09T00:32:26.654567440Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:26.681208 containerd[1459]: time="2025-09-09T00:32:26.681085872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:26.682861 containerd[1459]: time="2025-09-09T00:32:26.682825524Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.651746604s" Sep 9 00:32:26.682944 containerd[1459]: time="2025-09-09T00:32:26.682868715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:32:26.683581 containerd[1459]: time="2025-09-09T00:32:26.683547107Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:32:27.357501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739663082.mount: Deactivated successfully. 
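The PullImage/Pulled pairs in this stretch are containerd's CRI plugin pre-fetching the control-plane images (apiserver, controller-manager, scheduler, kube-proxy, coredns, pause). The same pulls can be driven by hand through the CRI socket; a sketch assuming crictl is installed, which this log does not show:

    # point crictl at the CRI socket containerd is serving
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/pause:3.10
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images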
Sep 9 00:32:27.367726 containerd[1459]: time="2025-09-09T00:32:27.367674947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:27.369022 containerd[1459]: time="2025-09-09T00:32:27.368977258Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:32:27.370216 containerd[1459]: time="2025-09-09T00:32:27.370178330Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:27.372974 containerd[1459]: time="2025-09-09T00:32:27.372945449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:27.373725 containerd[1459]: time="2025-09-09T00:32:27.373658626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 690.079078ms" Sep 9 00:32:27.373725 containerd[1459]: time="2025-09-09T00:32:27.373694914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:32:27.374676 containerd[1459]: time="2025-09-09T00:32:27.374650436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:32:28.360034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366593912.mount: Deactivated successfully. Sep 9 00:32:32.157800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:32:32.168663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:32.330384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:32.334656 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:32.376007 kubelet[2038]: E0909 00:32:32.375906 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:32.380430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:32.380694 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
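The "Scheduled restart job, restart counter is at N" lines are systemd's Restart= machinery re-running the still-unconfigured kubelet; the roughly ten-second gap between attempts suggests RestartSec=10s, though that interval is an assumption, not something this log states. The counter and policy can be read straight from the unit:

    # restart count and restart policy for the looping unit
    systemctl show kubelet -p NRestarts -p Restart -p RestartUSec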
Sep 9 00:32:34.043199 containerd[1459]: time="2025-09-09T00:32:34.043108438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:34.043988 containerd[1459]: time="2025-09-09T00:32:34.043944393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:32:34.045497 containerd[1459]: time="2025-09-09T00:32:34.045411743Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:34.051596 containerd[1459]: time="2025-09-09T00:32:34.051542370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:34.052792 containerd[1459]: time="2025-09-09T00:32:34.052738973Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 6.67805255s" Sep 9 00:32:34.052792 containerd[1459]: time="2025-09-09T00:32:34.052789349Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:32:37.399142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:37.408816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:37.440439 systemd[1]: Reloading requested from client PID 2081 ('systemctl') (unit session-9.scope)... Sep 9 00:32:37.440453 systemd[1]: Reloading... Sep 9 00:32:37.519504 zram_generator::config[2126]: No configuration found. Sep 9 00:32:38.341279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:32:38.423857 systemd[1]: Reloading finished in 982 ms. Sep 9 00:32:38.474322 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:32:38.474415 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:32:38.474832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:38.477147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:38.648568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:38.654199 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:32:38.687953 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:38.687953 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
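The "Flag ... has been deprecated" warnings here and just below say those settings belong in the file passed via --config. A minimal KubeletConfiguration sketch covering the two flags that have config-file equivalents, written as a heredoc; the field names are from the kubelet.config.k8s.io/v1beta1 API, and treating this as the node's real config is an assumption (kubeadm normally generates the file):

    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF

(--pod-infra-container-image has no config-file counterpart; per the warning, sandbox-image handling moves to the CRI side.)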
Sep 9 00:32:38.687953 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:38.688457 kubelet[2169]: I0909 00:32:38.688004 2169 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:32:40.062872 kubelet[2169]: I0909 00:32:40.062798 2169 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:32:40.062872 kubelet[2169]: I0909 00:32:40.062842 2169 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:32:40.063704 kubelet[2169]: I0909 00:32:40.063152 2169 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:32:40.090925 kubelet[2169]: E0909 00:32:40.090675 2169 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:32:40.092719 kubelet[2169]: I0909 00:32:40.092666 2169 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:32:40.099491 kubelet[2169]: E0909 00:32:40.099438 2169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:32:40.099585 kubelet[2169]: I0909 00:32:40.099492 2169 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:32:40.105010 kubelet[2169]: I0909 00:32:40.104979 2169 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:32:40.105252 kubelet[2169]: I0909 00:32:40.105218 2169 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:32:40.105417 kubelet[2169]: I0909 00:32:40.105240 2169 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:32:40.105550 kubelet[2169]: I0909 00:32:40.105434 2169 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:32:40.105550 kubelet[2169]: I0909 00:32:40.105445 2169 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:32:40.105642 kubelet[2169]: I0909 00:32:40.105624 2169 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:32:40.107867 kubelet[2169]: I0909 00:32:40.107837 2169 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:32:40.107867 kubelet[2169]: I0909 00:32:40.107858 2169 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:32:40.107949 kubelet[2169]: I0909 00:32:40.107892 2169 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:32:40.107949 kubelet[2169]: I0909 00:32:40.107913 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:32:40.114001 kubelet[2169]: E0909 00:32:40.113960 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:32:40.114001 kubelet[2169]: E0909 00:32:40.113959 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:32:40.114608 
kubelet[2169]: I0909 00:32:40.114586 2169 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:32:40.115977 kubelet[2169]: I0909 00:32:40.115527 2169 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 00:32:40.116503 kubelet[2169]: W0909 00:32:40.116258 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:32:40.120616 kubelet[2169]: I0909 00:32:40.120588 2169 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:32:40.120690 kubelet[2169]: I0909 00:32:40.120654 2169 server.go:1289] "Started kubelet"
Sep 9 00:32:40.121581 kubelet[2169]: I0909 00:32:40.121019 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:32:40.121581 kubelet[2169]: I0909 00:32:40.121542 2169 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:32:40.121697 kubelet[2169]: I0909 00:32:40.121565 2169 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:32:40.123704 kubelet[2169]: I0909 00:32:40.122119 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:32:40.123704 kubelet[2169]: I0909 00:32:40.123431 2169 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 00:32:40.126240 kubelet[2169]: I0909 00:32:40.126159 2169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:32:40.126343 kubelet[2169]: E0909 00:32:40.126242 2169 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:32:40.126343 kubelet[2169]: I0909 00:32:40.126316 2169 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:32:40.127782 kubelet[2169]: E0909 00:32:40.124932 2169 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375f199170bb7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:32:40.120617911 +0000 UTC m=+1.461886040,LastTimestamp:2025-09-09 00:32:40.120617911 +0000 UTC m=+1.461886040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:32:40.127782 kubelet[2169]: E0909 00:32:40.126857 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms"
Sep 9 00:32:40.127782 kubelet[2169]: E0909 00:32:40.126991 2169 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:32:40.127782 kubelet[2169]: I0909 00:32:40.127000 2169 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:32:40.127782 kubelet[2169]: I0909 00:32:40.127098 2169 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:32:40.127782 kubelet[2169]: E0909 00:32:40.127730 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 00:32:40.128046 kubelet[2169]: I0909 00:32:40.127733 2169 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:32:40.128927 kubelet[2169]: I0909 00:32:40.128893 2169 factory.go:223] Registration of the containerd container factory successfully
Sep 9 00:32:40.128927 kubelet[2169]: I0909 00:32:40.128908 2169 factory.go:223] Registration of the systemd container factory successfully
Sep 9 00:32:40.145754 kubelet[2169]: I0909 00:32:40.145719 2169 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:32:40.145754 kubelet[2169]: I0909 00:32:40.145743 2169 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:32:40.145754 kubelet[2169]: I0909 00:32:40.145773 2169 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:32:40.147272 kubelet[2169]: I0909 00:32:40.147235 2169 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:32:40.148821 kubelet[2169]: I0909 00:32:40.148800 2169 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:32:40.148885 kubelet[2169]: I0909 00:32:40.148866 2169 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 00:32:40.148932 kubelet[2169]: I0909 00:32:40.148897 2169 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:32:40.148932 kubelet[2169]: I0909 00:32:40.148914 2169 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 00:32:40.149029 kubelet[2169]: E0909 00:32:40.148963 2169 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:32:40.150314 kubelet[2169]: E0909 00:32:40.149553 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 9 00:32:40.150314 kubelet[2169]: I0909 00:32:40.150137 2169 policy_none.go:49] "None policy: Start"
Sep 9 00:32:40.150314 kubelet[2169]: I0909 00:32:40.150172 2169 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:32:40.150314 kubelet[2169]: I0909 00:32:40.150195 2169 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:32:40.155731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 00:32:40.169493 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 00:32:40.172342 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 00:32:40.193372 kubelet[2169]: E0909 00:32:40.193325 2169 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 00:32:40.193624 kubelet[2169]: I0909 00:32:40.193608 2169 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:32:40.193815 kubelet[2169]: I0909 00:32:40.193631 2169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:32:40.193899 kubelet[2169]: I0909 00:32:40.193884 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:32:40.194552 kubelet[2169]: E0909 00:32:40.194527 2169 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:32:40.194608 kubelet[2169]: E0909 00:32:40.194587 2169 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:32:40.263201 systemd[1]: Created slice kubepods-burstable-podcd0a65ffc1e4ee9cd95810de2223bcbe.slice - libcontainer container kubepods-burstable-podcd0a65ffc1e4ee9cd95810de2223bcbe.slice.
Sep 9 00:32:40.277084 kubelet[2169]: E0909 00:32:40.277032 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:40.280235 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 9 00:32:40.290706 kubelet[2169]: E0909 00:32:40.290673 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:40.293656 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 9 00:32:40.294916 kubelet[2169]: I0909 00:32:40.294893 2169 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:32:40.295314 kubelet[2169]: E0909 00:32:40.295284 2169 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 9 00:32:40.295792 kubelet[2169]: E0909 00:32:40.295733 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:40.328508 kubelet[2169]: E0909 00:32:40.328300 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms"
Sep 9 00:32:40.428902 kubelet[2169]: I0909 00:32:40.428822 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:40.428902 kubelet[2169]: I0909 00:32:40.428887 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:40.428902 kubelet[2169]: I0909 00:32:40.428915 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:40.429156 kubelet[2169]: I0909 00:32:40.428943 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:40.429156 kubelet[2169]: I0909 00:32:40.428996 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:40.429156 kubelet[2169]: I0909 00:32:40.429022 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:32:40.429156 kubelet[2169]: I0909 00:32:40.429047 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:40.429156 kubelet[2169]: I0909 00:32:40.429069 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:40.429296 kubelet[2169]: I0909 00:32:40.429090 2169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:40.497359 kubelet[2169]: I0909 00:32:40.497303 2169 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:32:40.497685 kubelet[2169]: E0909 00:32:40.497651 2169 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 9 00:32:40.577659 kubelet[2169]: E0909 00:32:40.577598 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:40.578441 containerd[1459]: time="2025-09-09T00:32:40.578390054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd0a65ffc1e4ee9cd95810de2223bcbe,Namespace:kube-system,Attempt:0,}"
Sep 9 00:32:40.591887 kubelet[2169]: E0909 00:32:40.591756 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:40.592546 containerd[1459]: time="2025-09-09T00:32:40.592431444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 9 00:32:40.596645 kubelet[2169]: E0909 00:32:40.596622 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:40.597100 containerd[1459]: time="2025-09-09T00:32:40.597058353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 9 00:32:40.729834 kubelet[2169]: E0909 00:32:40.729783 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms"
Sep 9 00:32:40.899896 kubelet[2169]: I0909 00:32:40.899782 2169 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:32:40.900156 kubelet[2169]: E0909 00:32:40.900118 2169 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 9 00:32:40.999497 kubelet[2169]: E0909 00:32:40.999434 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 00:32:41.072077 kubelet[2169]: E0909 00:32:41.072017 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 00:32:41.154149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538064101.mount: Deactivated successfully.
Sep 9 00:32:41.162905 containerd[1459]: time="2025-09-09T00:32:41.162803039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:32:41.164915 containerd[1459]: time="2025-09-09T00:32:41.164845091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:32:41.166184 containerd[1459]: time="2025-09-09T00:32:41.166130578Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:32:41.167343 containerd[1459]: time="2025-09-09T00:32:41.167236565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:32:41.168229 containerd[1459]: time="2025-09-09T00:32:41.168196526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:32:41.169165 containerd[1459]: time="2025-09-09T00:32:41.169104908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 9 00:32:41.169896 containerd[1459]: time="2025-09-09T00:32:41.169843568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Sep 9 00:32:41.173326 containerd[1459]: time="2025-09-09T00:32:41.173280745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:32:41.175967 containerd[1459]: time="2025-09-09T00:32:41.175908597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 597.398005ms"
Sep 9 00:32:41.176883 containerd[1459]: time="2025-09-09T00:32:41.176819964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.677553ms"
Sep 9 00:32:41.177663 containerd[1459]: time="2025-09-09T00:32:41.177625802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.000621ms"
Sep 9 00:32:41.289159 containerd[1459]: time="2025-09-09T00:32:41.289043994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:32:41.289159 containerd[1459]: time="2025-09-09T00:32:41.289099559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:32:41.289159 containerd[1459]: time="2025-09-09T00:32:41.289110610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.289391 containerd[1459]: time="2025-09-09T00:32:41.289187787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.295105 containerd[1459]: time="2025-09-09T00:32:41.294987833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:32:41.295105 containerd[1459]: time="2025-09-09T00:32:41.295024081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:32:41.295273 containerd[1459]: time="2025-09-09T00:32:41.295116567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:32:41.295273 containerd[1459]: time="2025-09-09T00:32:41.295099625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:32:41.295273 containerd[1459]: time="2025-09-09T00:32:41.295145883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.295273 containerd[1459]: time="2025-09-09T00:32:41.295173104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.295558 containerd[1459]: time="2025-09-09T00:32:41.295286800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.295558 containerd[1459]: time="2025-09-09T00:32:41.295315143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:32:41.320681 systemd[1]: Started cri-containerd-d06f67aa07358a642bfba93b04a83f64a365a9a878b15412771c1500f337c1df.scope - libcontainer container d06f67aa07358a642bfba93b04a83f64a365a9a878b15412771c1500f337c1df.
Sep 9 00:32:41.326384 systemd[1]: Started cri-containerd-4a248dfb665c43b6b2add32ad7882557892ca4acae62335ae51b8d5e7b063931.scope - libcontainer container 4a248dfb665c43b6b2add32ad7882557892ca4acae62335ae51b8d5e7b063931.
Sep 9 00:32:41.328960 systemd[1]: Started cri-containerd-529686b8493ae142e265510a117895276766e391a043f7117d73ab60cd92e2a6.scope - libcontainer container 529686b8493ae142e265510a117895276766e391a043f7117d73ab60cd92e2a6.
Sep 9 00:32:41.368393 containerd[1459]: time="2025-09-09T00:32:41.368339672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd0a65ffc1e4ee9cd95810de2223bcbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d06f67aa07358a642bfba93b04a83f64a365a9a878b15412771c1500f337c1df\""
Sep 9 00:32:41.369845 kubelet[2169]: E0909 00:32:41.369709 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:41.377661 containerd[1459]: time="2025-09-09T00:32:41.377612984Z" level=info msg="CreateContainer within sandbox \"d06f67aa07358a642bfba93b04a83f64a365a9a878b15412771c1500f337c1df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:32:41.379135 containerd[1459]: time="2025-09-09T00:32:41.379049788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a248dfb665c43b6b2add32ad7882557892ca4acae62335ae51b8d5e7b063931\""
Sep 9 00:32:41.381394 kubelet[2169]: E0909 00:32:41.381367 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:41.384750 containerd[1459]: time="2025-09-09T00:32:41.384707976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"529686b8493ae142e265510a117895276766e391a043f7117d73ab60cd92e2a6\""
Sep 9 00:32:41.385781 kubelet[2169]: E0909 00:32:41.385746 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:41.388138 containerd[1459]: time="2025-09-09T00:32:41.388111278Z" level=info msg="CreateContainer within sandbox \"4a248dfb665c43b6b2add32ad7882557892ca4acae62335ae51b8d5e7b063931\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:32:41.391388 containerd[1459]: time="2025-09-09T00:32:41.391357693Z" level=info msg="CreateContainer within sandbox \"529686b8493ae142e265510a117895276766e391a043f7117d73ab60cd92e2a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:32:41.413289 containerd[1459]: time="2025-09-09T00:32:41.413151974Z" level=info msg="CreateContainer within sandbox \"d06f67aa07358a642bfba93b04a83f64a365a9a878b15412771c1500f337c1df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87744c98a512b2ca41ccac733b8313e07653cdaa3d5694e854d1626cc7b7bf20\""
Sep 9 00:32:41.413981 containerd[1459]: time="2025-09-09T00:32:41.413944356Z" level=info msg="StartContainer for \"87744c98a512b2ca41ccac733b8313e07653cdaa3d5694e854d1626cc7b7bf20\""
Sep 9 00:32:41.418885 containerd[1459]: time="2025-09-09T00:32:41.418750128Z" level=info msg="CreateContainer within sandbox \"4a248dfb665c43b6b2add32ad7882557892ca4acae62335ae51b8d5e7b063931\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af60783cb8d6697392e839e31057cf429f0084cecbdb2c94f43275e727717d4f\""
Sep 9 00:32:41.419401 containerd[1459]: time="2025-09-09T00:32:41.419381294Z" level=info msg="StartContainer for \"af60783cb8d6697392e839e31057cf429f0084cecbdb2c94f43275e727717d4f\""
Sep 9 00:32:41.420599 containerd[1459]: time="2025-09-09T00:32:41.420577282Z" level=info msg="CreateContainer within sandbox \"529686b8493ae142e265510a117895276766e391a043f7117d73ab60cd92e2a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"139b0cb49c96987f59c3c95e40f698723f46bd4f2a3fad6cd1751fd11e19c832\""
Sep 9 00:32:41.421952 containerd[1459]: time="2025-09-09T00:32:41.421104291Z" level=info msg="StartContainer for \"139b0cb49c96987f59c3c95e40f698723f46bd4f2a3fad6cd1751fd11e19c832\""
Sep 9 00:32:41.449637 systemd[1]: Started cri-containerd-87744c98a512b2ca41ccac733b8313e07653cdaa3d5694e854d1626cc7b7bf20.scope - libcontainer container 87744c98a512b2ca41ccac733b8313e07653cdaa3d5694e854d1626cc7b7bf20.
Sep 9 00:32:41.460647 systemd[1]: Started cri-containerd-139b0cb49c96987f59c3c95e40f698723f46bd4f2a3fad6cd1751fd11e19c832.scope - libcontainer container 139b0cb49c96987f59c3c95e40f698723f46bd4f2a3fad6cd1751fd11e19c832.
Sep 9 00:32:41.462110 systemd[1]: Started cri-containerd-af60783cb8d6697392e839e31057cf429f0084cecbdb2c94f43275e727717d4f.scope - libcontainer container af60783cb8d6697392e839e31057cf429f0084cecbdb2c94f43275e727717d4f.
Sep 9 00:32:41.504497 containerd[1459]: time="2025-09-09T00:32:41.503194698Z" level=info msg="StartContainer for \"87744c98a512b2ca41ccac733b8313e07653cdaa3d5694e854d1626cc7b7bf20\" returns successfully"
Sep 9 00:32:41.514608 containerd[1459]: time="2025-09-09T00:32:41.514534027Z" level=info msg="StartContainer for \"af60783cb8d6697392e839e31057cf429f0084cecbdb2c94f43275e727717d4f\" returns successfully"
Sep 9 00:32:41.514830 containerd[1459]: time="2025-09-09T00:32:41.514793119Z" level=info msg="StartContainer for \"139b0cb49c96987f59c3c95e40f698723f46bd4f2a3fad6cd1751fd11e19c832\" returns successfully"
Sep 9 00:32:41.531044 kubelet[2169]: E0909 00:32:41.530987 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s"
Sep 9 00:32:41.540511 kubelet[2169]: E0909 00:32:41.539785 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 9 00:32:41.547578 kubelet[2169]: E0909 00:32:41.547529 2169 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 9 00:32:41.702266 kubelet[2169]: I0909 00:32:41.702120 2169 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:32:42.159120 kubelet[2169]: E0909 00:32:42.159076 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:42.159570 kubelet[2169]: E0909 00:32:42.159225 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:42.163442 kubelet[2169]: E0909 00:32:42.163406 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:42.164861 kubelet[2169]: E0909 00:32:42.163537 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:42.164861 kubelet[2169]: E0909 00:32:42.164516 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:42.164861 kubelet[2169]: E0909 00:32:42.164618 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:42.898778 update_engine[1447]: I20250909 00:32:42.898546 1447 update_attempter.cc:509] Updating boot flags...
Sep 9 00:32:43.167388 kubelet[2169]: E0909 00:32:43.167241 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:43.168831 kubelet[2169]: E0909 00:32:43.167400 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:43.168831 kubelet[2169]: E0909 00:32:43.168200 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:43.168831 kubelet[2169]: E0909 00:32:43.168289 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:43.168831 kubelet[2169]: E0909 00:32:43.168347 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:43.168831 kubelet[2169]: E0909 00:32:43.168385 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:43.965192 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2466)
Sep 9 00:32:43.997538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2467)
Sep 9 00:32:44.172668 kubelet[2169]: E0909 00:32:44.172615 2169 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:32:44.173296 kubelet[2169]: E0909 00:32:44.172844 2169 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:44.815679 kubelet[2169]: E0909 00:32:44.815626 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:32:44.902760 kubelet[2169]: I0909 00:32:44.902240 2169 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:32:44.902760 kubelet[2169]: E0909 00:32:44.902287 2169 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 9 00:32:44.927797 kubelet[2169]: I0909 00:32:44.926793 2169 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:44.944586 kubelet[2169]: E0909 00:32:44.944535 2169 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:44.946780 kubelet[2169]: I0909 00:32:44.946504 2169 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:44.950875 kubelet[2169]: E0909 00:32:44.950808 2169 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:44.950875 kubelet[2169]: I0909 00:32:44.950862 2169 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:32:44.953105 kubelet[2169]: E0909 00:32:44.953046 2169 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:32:45.111422 kubelet[2169]: I0909 00:32:45.111260 2169 apiserver.go:52] "Watching apiserver"
Sep 9 00:32:45.127610 kubelet[2169]: I0909 00:32:45.127544 2169 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:32:47.107336 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-9.scope)...
Sep 9 00:32:47.107359 systemd[1]: Reloading...
Sep 9 00:32:47.193511 zram_generator::config[2513]: No configuration found.
Sep 9 00:32:47.329172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:32:47.428945 systemd[1]: Reloading finished in 320 ms.
Sep 9 00:32:47.478152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:32:47.501890 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:32:47.502162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:32:47.502229 systemd[1]: kubelet.service: Consumed 1.200s CPU time, 132.0M memory peak, 0B memory swap peak.
Sep 9 00:32:47.510722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:32:47.710196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:32:47.716425 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:32:47.769366 kubelet[2558]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:32:47.769366 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:32:47.769366 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:32:47.769873 kubelet[2558]: I0909 00:32:47.769416 2558 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:32:47.775498 kubelet[2558]: I0909 00:32:47.775471 2558 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 00:32:47.775498 kubelet[2558]: I0909 00:32:47.775490 2558 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:32:47.775670 kubelet[2558]: I0909 00:32:47.775650 2558 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 00:32:47.776841 kubelet[2558]: I0909 00:32:47.776818 2558 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 9 00:32:47.782430 kubelet[2558]: I0909 00:32:47.782400 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:32:47.786401 kubelet[2558]: E0909 00:32:47.786357 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:32:47.786401 kubelet[2558]: I0909 00:32:47.786393 2558 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:32:47.790937 kubelet[2558]: I0909 00:32:47.790903 2558 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:32:47.791149 kubelet[2558]: I0909 00:32:47.791119 2558 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:32:47.791279 kubelet[2558]: I0909 00:32:47.791142 2558 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:32:47.791361 kubelet[2558]: I0909 00:32:47.791289 2558 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:32:47.791361 kubelet[2558]: I0909 00:32:47.791300 2558 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 00:32:47.791361 kubelet[2558]: I0909 00:32:47.791352 2558 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:32:47.791535 kubelet[2558]: I0909 00:32:47.791521 2558 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 00:32:47.791588 kubelet[2558]: I0909 00:32:47.791543 2558 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:32:47.791588 kubelet[2558]: I0909 00:32:47.791564 2558 kubelet.go:386] "Adding apiserver pod source"
Sep 9 00:32:47.791588 kubelet[2558]: I0909 00:32:47.791581 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:32:47.793657 kubelet[2558]: I0909 00:32:47.793610 2558 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 9 00:32:47.794224 kubelet[2558]: I0909 00:32:47.794112 2558 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 9 00:32:47.800170 kubelet[2558]: I0909 00:32:47.800137 2558 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:32:47.800253 kubelet[2558]: I0909 00:32:47.800185 2558 server.go:1289] "Started kubelet"
Sep 9 00:32:47.802215 kubelet[2558]: I0909 00:32:47.801719 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:32:47.804914 kubelet[2558]: I0909 00:32:47.804866 2558 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:32:47.805024 kubelet[2558]: I0909 00:32:47.805007 2558 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:32:47.805218 kubelet[2558]: I0909 00:32:47.805180 2558 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:32:47.805955 kubelet[2558]: I0909 00:32:47.805585 2558 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:32:47.805955 kubelet[2558]: I0909 00:32:47.805883 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:32:47.807243 kubelet[2558]: I0909 00:32:47.807176 2558 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:32:47.808201 kubelet[2558]: I0909 00:32:47.808181 2558 factory.go:223] Registration of the systemd container factory successfully
Sep 9 00:32:47.810307 kubelet[2558]: I0909 00:32:47.808606 2558 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:32:47.810307 kubelet[2558]: I0909 00:32:47.808685 2558 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:32:47.813958 kubelet[2558]: E0909 00:32:47.813924 2558 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:32:47.814744 kubelet[2558]: I0909 00:32:47.814238 2558 factory.go:223] Registration of the containerd container factory successfully
Sep 9 00:32:47.816291 kubelet[2558]: I0909 00:32:47.816272 2558 server.go:317] "Adding debug handlers to kubelet server"
Sep 9 00:32:47.820280 kubelet[2558]: I0909 00:32:47.820256 2558 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:32:47.821946 kubelet[2558]: I0909 00:32:47.821834 2558 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:32:47.821946 kubelet[2558]: I0909 00:32:47.821862 2558 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 9 00:32:47.821946 kubelet[2558]: I0909 00:32:47.821888 2558 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:32:47.821946 kubelet[2558]: I0909 00:32:47.821894 2558 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 9 00:32:47.821946 kubelet[2558]: E0909 00:32:47.821932 2558 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:32:47.846969 kubelet[2558]: I0909 00:32:47.846923 2558 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:32:47.846969 kubelet[2558]: I0909 00:32:47.846940 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:32:47.846969 kubelet[2558]: I0909 00:32:47.846964 2558 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:32:47.847200 kubelet[2558]: I0909 00:32:47.847098 2558 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:32:47.847200 kubelet[2558]: I0909 00:32:47.847113 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:32:47.847200 kubelet[2558]: I0909 00:32:47.847132 2558 policy_none.go:49] "None policy: Start"
Sep 9 00:32:47.847200 kubelet[2558]: I0909 00:32:47.847144 2558 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:32:47.847200 kubelet[2558]: I0909 00:32:47.847154 2558 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:32:47.847316 kubelet[2558]: I0909 00:32:47.847283 2558 state_mem.go:75] "Updated machine memory state"
Sep 9 00:32:47.851720 kubelet[2558]: E0909 00:32:47.851675 2558 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 9 00:32:47.851925 kubelet[2558]: I0909 00:32:47.851911 2558 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:32:47.851969 kubelet[2558]: I0909 00:32:47.851929 2558 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:32:47.852144 kubelet[2558]: I0909 00:32:47.852124 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:32:47.853156 kubelet[2558]: E0909 00:32:47.853125 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:32:47.923437 kubelet[2558]: I0909 00:32:47.923391 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:47.923741 kubelet[2558]: I0909 00:32:47.923704 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:47.923894 kubelet[2558]: I0909 00:32:47.923870 2558 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 9 00:32:47.960190 kubelet[2558]: I0909 00:32:47.960145 2558 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:32:48.006654 kubelet[2558]: I0909 00:32:48.006602 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:48.006654 kubelet[2558]: I0909 00:32:48.006648 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:48.006654 kubelet[2558]: I0909 00:32:48.006674 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:48.006954 kubelet[2558]: I0909 00:32:48.006715 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:48.006954 kubelet[2558]: I0909 00:32:48.006740 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:48.006954 kubelet[2558]: I0909 00:32:48.006755 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:32:48.006954 kubelet[2558]: I0909 00:32:48.006774 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:32:48.006954 kubelet[2558]: I0909 00:32:48.006788 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:48.007113 kubelet[2558]: I0909 00:32:48.006807 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd0a65ffc1e4ee9cd95810de2223bcbe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd0a65ffc1e4ee9cd95810de2223bcbe\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:32:48.197006 kubelet[2558]: I0909 00:32:48.196952 2558 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 9 00:32:48.197176 kubelet[2558]: I0909 00:32:48.197064 2558 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 9 00:32:48.231857 kubelet[2558]: E0909 00:32:48.231602 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.232896 kubelet[2558]: E0909 00:32:48.231741 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.232896 kubelet[2558]: E0909 00:32:48.231791 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.794422 kubelet[2558]: I0909 00:32:48.794352 2558 apiserver.go:52] "Watching apiserver"
Sep 9 00:32:48.805997 kubelet[2558]: I0909 00:32:48.805953 2558 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 00:32:48.831235 kubelet[2558]: E0909 00:32:48.831190 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.831417 kubelet[2558]: E0909 00:32:48.831365 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.832083 kubelet[2558]: E0909 00:32:48.832014 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:48.896674 kubelet[2558]: I0909 00:32:48.896447 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.89640449 podStartE2EDuration="1.89640449s" podCreationTimestamp="2025-09-09 00:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:48.896300403 +0000 UTC m=+1.174425613" watchObservedRunningTime="2025-09-09 00:32:48.89640449 +0000 UTC m=+1.174529700"
Sep 9 00:32:48.911274 kubelet[2558]: I0909 00:32:48.911175 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.911148382 podStartE2EDuration="1.911148382s" podCreationTimestamp="2025-09-09 00:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:48.903057383 +0000 UTC m=+1.181182603" watchObservedRunningTime="2025-09-09 00:32:48.911148382 +0000 UTC m=+1.189273612"
Sep 9 00:32:48.911511 kubelet[2558]: I0909 00:32:48.911291 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.911285941 podStartE2EDuration="1.911285941s" podCreationTimestamp="2025-09-09 00:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:48.911022864 +0000 UTC m=+1.189148084" watchObservedRunningTime="2025-09-09 00:32:48.911285941 +0000 UTC m=+1.189411151"
Sep 9 00:32:49.832389 kubelet[2558]: E0909 00:32:49.832353 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:49.832902 kubelet[2558]: E0909 00:32:49.832417 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:50.833635 kubelet[2558]: E0909 00:32:50.833585 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:52.732278 kubelet[2558]: I0909 00:32:52.732226 2558 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 00:32:52.732813 containerd[1459]: time="2025-09-09T00:32:52.732735180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 00:32:52.733119 kubelet[2558]: I0909 00:32:52.732938 2558 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 00:32:53.406564 systemd[1]: Created slice kubepods-besteffort-poda1308c2e_2d00_4cfa_9dd6_75cf2c988c97.slice - libcontainer container kubepods-besteffort-poda1308c2e_2d00_4cfa_9dd6_75cf2c988c97.slice.
Sep 9 00:32:53.453584 kubelet[2558]: I0909 00:32:53.453536 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1308c2e-2d00-4cfa-9dd6-75cf2c988c97-xtables-lock\") pod \"kube-proxy-lp8jr\" (UID: \"a1308c2e-2d00-4cfa-9dd6-75cf2c988c97\") " pod="kube-system/kube-proxy-lp8jr" Sep 9 00:32:53.453863 kubelet[2558]: I0909 00:32:53.453823 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1308c2e-2d00-4cfa-9dd6-75cf2c988c97-lib-modules\") pod \"kube-proxy-lp8jr\" (UID: \"a1308c2e-2d00-4cfa-9dd6-75cf2c988c97\") " pod="kube-system/kube-proxy-lp8jr" Sep 9 00:32:53.453863 kubelet[2558]: I0909 00:32:53.453852 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1308c2e-2d00-4cfa-9dd6-75cf2c988c97-kube-proxy\") pod \"kube-proxy-lp8jr\" (UID: \"a1308c2e-2d00-4cfa-9dd6-75cf2c988c97\") " pod="kube-system/kube-proxy-lp8jr" Sep 9 00:32:53.453863 kubelet[2558]: I0909 00:32:53.453872 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8x4\" (UniqueName: \"kubernetes.io/projected/a1308c2e-2d00-4cfa-9dd6-75cf2c988c97-kube-api-access-zc8x4\") pod \"kube-proxy-lp8jr\" (UID: \"a1308c2e-2d00-4cfa-9dd6-75cf2c988c97\") " pod="kube-system/kube-proxy-lp8jr" Sep 9 00:32:53.715265 kubelet[2558]: E0909 00:32:53.715076 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:53.716220 containerd[1459]: time="2025-09-09T00:32:53.716166686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lp8jr,Uid:a1308c2e-2d00-4cfa-9dd6-75cf2c988c97,Namespace:kube-system,Attempt:0,}" Sep 9 00:32:53.749970 containerd[1459]: time="2025-09-09T00:32:53.749821363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:53.749970 containerd[1459]: time="2025-09-09T00:32:53.749895192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:53.749970 containerd[1459]: time="2025-09-09T00:32:53.749911413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:53.750640 containerd[1459]: time="2025-09-09T00:32:53.750011702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:53.780707 systemd[1]: Started cri-containerd-889ae30d0e970dac19e700e773336a14dfad2be69745ce7bf5c3fa229e4f5d87.scope - libcontainer container 889ae30d0e970dac19e700e773336a14dfad2be69745ce7bf5c3fa229e4f5d87. 
Sep 9 00:32:53.808041 containerd[1459]: time="2025-09-09T00:32:53.807992505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lp8jr,Uid:a1308c2e-2d00-4cfa-9dd6-75cf2c988c97,Namespace:kube-system,Attempt:0,} returns sandbox id \"889ae30d0e970dac19e700e773336a14dfad2be69745ce7bf5c3fa229e4f5d87\"" Sep 9 00:32:53.808955 kubelet[2558]: E0909 00:32:53.808932 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:53.836810 containerd[1459]: time="2025-09-09T00:32:53.836772836Z" level=info msg="CreateContainer within sandbox \"889ae30d0e970dac19e700e773336a14dfad2be69745ce7bf5c3fa229e4f5d87\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:32:53.855825 containerd[1459]: time="2025-09-09T00:32:53.855764639Z" level=info msg="CreateContainer within sandbox \"889ae30d0e970dac19e700e773336a14dfad2be69745ce7bf5c3fa229e4f5d87\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8b43679239d8e1109174df09c9d21790f49716c9cff40cc118773a01f490755\"" Sep 9 00:32:53.856499 containerd[1459]: time="2025-09-09T00:32:53.856246026Z" level=info msg="StartContainer for \"c8b43679239d8e1109174df09c9d21790f49716c9cff40cc118773a01f490755\"" Sep 9 00:32:53.893220 systemd[1]: Started cri-containerd-c8b43679239d8e1109174df09c9d21790f49716c9cff40cc118773a01f490755.scope - libcontainer container c8b43679239d8e1109174df09c9d21790f49716c9cff40cc118773a01f490755. Sep 9 00:32:53.934648 systemd[1]: Created slice kubepods-besteffort-pod34e367bc_848e_40f9_bca1_1f756a7f8d3a.slice - libcontainer container kubepods-besteffort-pod34e367bc_848e_40f9_bca1_1f756a7f8d3a.slice. Sep 9 00:32:53.935308 containerd[1459]: time="2025-09-09T00:32:53.935012283Z" level=info msg="StartContainer for \"c8b43679239d8e1109174df09c9d21790f49716c9cff40cc118773a01f490755\" returns successfully" Sep 9 00:32:53.956489 kubelet[2558]: I0909 00:32:53.956432 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjq6\" (UniqueName: \"kubernetes.io/projected/34e367bc-848e-40f9-bca1-1f756a7f8d3a-kube-api-access-8hjq6\") pod \"tigera-operator-755d956888-h94l2\" (UID: \"34e367bc-848e-40f9-bca1-1f756a7f8d3a\") " pod="tigera-operator/tigera-operator-755d956888-h94l2" Sep 9 00:32:53.956789 kubelet[2558]: I0909 00:32:53.956724 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34e367bc-848e-40f9-bca1-1f756a7f8d3a-var-lib-calico\") pod \"tigera-operator-755d956888-h94l2\" (UID: \"34e367bc-848e-40f9-bca1-1f756a7f8d3a\") " pod="tigera-operator/tigera-operator-755d956888-h94l2" Sep 9 00:32:54.237842 containerd[1459]: time="2025-09-09T00:32:54.237784368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-h94l2,Uid:34e367bc-848e-40f9-bca1-1f756a7f8d3a,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:32:54.271449 containerd[1459]: time="2025-09-09T00:32:54.271270317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:54.271449 containerd[1459]: time="2025-09-09T00:32:54.271396545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:54.271449 containerd[1459]: time="2025-09-09T00:32:54.271417034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:54.271773 containerd[1459]: time="2025-09-09T00:32:54.271682734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:54.293783 systemd[1]: Started cri-containerd-001d3a3db12c43e51f972eb9d6cabaa523d73a639f77c0633b56365d732206fd.scope - libcontainer container 001d3a3db12c43e51f972eb9d6cabaa523d73a639f77c0633b56365d732206fd. Sep 9 00:32:54.333289 containerd[1459]: time="2025-09-09T00:32:54.333220885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-h94l2,Uid:34e367bc-848e-40f9-bca1-1f756a7f8d3a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"001d3a3db12c43e51f972eb9d6cabaa523d73a639f77c0633b56365d732206fd\"" Sep 9 00:32:54.338022 containerd[1459]: time="2025-09-09T00:32:54.337928645Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:32:54.843313 kubelet[2558]: E0909 00:32:54.843265 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:54.853749 kubelet[2558]: I0909 00:32:54.852795 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lp8jr" podStartSLOduration=1.852773886 podStartE2EDuration="1.852773886s" podCreationTimestamp="2025-09-09 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:54.852656835 +0000 UTC m=+7.130782045" watchObservedRunningTime="2025-09-09 00:32:54.852773886 +0000 UTC m=+7.130899096" Sep 9 00:32:55.838577 kubelet[2558]: E0909 00:32:55.838542 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:55.845091 kubelet[2558]: E0909 00:32:55.845061 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:55.845716 kubelet[2558]: E0909 00:32:55.845258 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:56.082433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342405031.mount: Deactivated successfully. 
Sep 9 00:32:57.509119 kubelet[2558]: E0909 00:32:57.509072 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:57.848742 kubelet[2558]: E0909 00:32:57.848419 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:58.496974 containerd[1459]: time="2025-09-09T00:32:58.496872226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:32:58.498475 containerd[1459]: time="2025-09-09T00:32:58.498395634Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 9 00:32:58.499840 containerd[1459]: time="2025-09-09T00:32:58.499790508Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:32:58.502671 containerd[1459]: time="2025-09-09T00:32:58.502501169Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:32:58.503307 containerd[1459]: time="2025-09-09T00:32:58.503250761Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.165276861s"
Sep 9 00:32:58.503307 containerd[1459]: time="2025-09-09T00:32:58.503294803Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 9 00:32:58.509559 containerd[1459]: time="2025-09-09T00:32:58.509503768Z" level=info msg="CreateContainer within sandbox \"001d3a3db12c43e51f972eb9d6cabaa523d73a639f77c0633b56365d732206fd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 9 00:32:58.710193 containerd[1459]: time="2025-09-09T00:32:58.710118442Z" level=info msg="CreateContainer within sandbox \"001d3a3db12c43e51f972eb9d6cabaa523d73a639f77c0633b56365d732206fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"035f5a07c1370e9a967174a9717640cc491dae72db953f8fb7d2131eb6f8ab85\""
Sep 9 00:32:58.710916 containerd[1459]: time="2025-09-09T00:32:58.710712179Z" level=info msg="StartContainer for \"035f5a07c1370e9a967174a9717640cc491dae72db953f8fb7d2131eb6f8ab85\""
Sep 9 00:32:58.750791 systemd[1]: Started cri-containerd-035f5a07c1370e9a967174a9717640cc491dae72db953f8fb7d2131eb6f8ab85.scope - libcontainer container 035f5a07c1370e9a967174a9717640cc491dae72db953f8fb7d2131eb6f8ab85.
Sep 9 00:32:58.783501 containerd[1459]: time="2025-09-09T00:32:58.783427630Z" level=info msg="StartContainer for \"035f5a07c1370e9a967174a9717640cc491dae72db953f8fb7d2131eb6f8ab85\" returns successfully"
Sep 9 00:32:58.852522 kubelet[2558]: E0909 00:32:58.852456 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:32:58.863538 kubelet[2558]: I0909 00:32:58.863429 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-h94l2" podStartSLOduration=1.6957516400000001 podStartE2EDuration="5.863357115s" podCreationTimestamp="2025-09-09 00:32:53 +0000 UTC" firstStartedPulling="2025-09-09 00:32:54.336924012 +0000 UTC m=+6.615049222" lastFinishedPulling="2025-09-09 00:32:58.504529487 +0000 UTC m=+10.782654697" observedRunningTime="2025-09-09 00:32:58.863098889 +0000 UTC m=+11.141224109" watchObservedRunningTime="2025-09-09 00:32:58.863357115 +0000 UTC m=+11.141482325"
Sep 9 00:32:59.946982 kubelet[2558]: E0909 00:32:59.946933 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:00.857165 kubelet[2558]: E0909 00:33:00.857114 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:05.019745 sudo[1651]: pam_unix(sudo:session): session closed for user root
Sep 9 00:33:05.027513 sshd[1648]: pam_unix(sshd:session): session closed for user core
Sep 9 00:33:05.049218 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:34708.service: Deactivated successfully.
Sep 9 00:33:05.051567 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:33:05.051793 systemd[1]: session-9.scope: Consumed 6.062s CPU time, 160.7M memory peak, 0B memory swap peak.
Sep 9 00:33:05.052318 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:33:05.053451 systemd-logind[1440]: Removed session 9.
Sep 9 00:33:07.811833 systemd[1]: Created slice kubepods-besteffort-pod36eb4b6c_dfdd_4e4d_b51a_9893d59e4f09.slice - libcontainer container kubepods-besteffort-pod36eb4b6c_dfdd_4e4d_b51a_9893d59e4f09.slice.
Sep 9 00:33:07.878782 kubelet[2558]: I0909 00:33:07.876605 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09-tigera-ca-bundle\") pod \"calico-typha-854844fdf6-j2lk5\" (UID: \"36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09\") " pod="calico-system/calico-typha-854844fdf6-j2lk5"
Sep 9 00:33:07.878782 kubelet[2558]: I0909 00:33:07.876660 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm2h\" (UniqueName: \"kubernetes.io/projected/36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09-kube-api-access-9lm2h\") pod \"calico-typha-854844fdf6-j2lk5\" (UID: \"36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09\") " pod="calico-system/calico-typha-854844fdf6-j2lk5"
Sep 9 00:33:07.878782 kubelet[2558]: I0909 00:33:07.876685 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09-typha-certs\") pod \"calico-typha-854844fdf6-j2lk5\" (UID: \"36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09\") " pod="calico-system/calico-typha-854844fdf6-j2lk5"
Sep 9 00:33:07.886516 systemd[1]: Created slice kubepods-besteffort-pod8cec681f_68ca_430f_93e8_0fe9389fa67b.slice - libcontainer container kubepods-besteffort-pod8cec681f_68ca_430f_93e8_0fe9389fa67b.slice.
Sep 9 00:33:07.977257 kubelet[2558]: I0909 00:33:07.977193 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpgnn\" (UniqueName: \"kubernetes.io/projected/8cec681f-68ca-430f-93e8-0fe9389fa67b-kube-api-access-zpgnn\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977257 kubelet[2558]: I0909 00:33:07.977259 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-flexvol-driver-host\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977490 kubelet[2558]: I0909 00:33:07.977291 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cec681f-68ca-430f-93e8-0fe9389fa67b-tigera-ca-bundle\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977490 kubelet[2558]: I0909 00:33:07.977313 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-var-lib-calico\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977490 kubelet[2558]: I0909 00:33:07.977330 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-policysync\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977490 kubelet[2558]: I0909 00:33:07.977347 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-xtables-lock\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977490 kubelet[2558]: I0909 00:33:07.977372 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-cni-bin-dir\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977630 kubelet[2558]: I0909 00:33:07.977389 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-lib-modules\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977630 kubelet[2558]: I0909 00:33:07.977405 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-var-run-calico\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977630 kubelet[2558]: I0909 00:33:07.977430 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-cni-net-dir\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977630 kubelet[2558]: I0909 00:33:07.977457 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8cec681f-68ca-430f-93e8-0fe9389fa67b-cni-log-dir\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:07.977630 kubelet[2558]: I0909 00:33:07.977489 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8cec681f-68ca-430f-93e8-0fe9389fa67b-node-certs\") pod \"calico-node-dfdxp\" (UID: \"8cec681f-68ca-430f-93e8-0fe9389fa67b\") " pod="calico-system/calico-node-dfdxp"
Sep 9 00:33:08.081745 kubelet[2558]: E0909 00:33:08.081572 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:33:08.081745 kubelet[2558]: W0909 00:33:08.081595 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:33:08.093757 kubelet[2558]: E0909 00:33:08.093715 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the driver-call.go/plugins.go triplet above for nodeagent~uds recurs dozens of times with fresh timestamps, interleaved between the entries below, through 00:33:08.723; the duplicate triplets are elided ...]
Sep 9 00:33:08.119916 kubelet[2558]: E0909 00:33:08.119861 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:08.120427 containerd[1459]: time="2025-09-09T00:33:08.120387468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-854844fdf6-j2lk5,Uid:36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:08.490590 kubelet[2558]: E0909 00:33:08.490530 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:08.582041 kubelet[2558]: I0909 00:33:08.582007 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bff8f74-70ce-4a82-bbde-261eec1d2cf4-socket-dir\") pod \"csi-node-driver-5mndv\" (UID: \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\") " pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:08.585408 kubelet[2558]: I0909 00:33:08.584142 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bff8f74-70ce-4a82-bbde-261eec1d2cf4-registration-dir\") pod \"csi-node-driver-5mndv\" (UID: \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\") " pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:08.588206 kubelet[2558]: I0909 00:33:08.588232 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bff8f74-70ce-4a82-bbde-261eec1d2cf4-varrun\") pod \"csi-node-driver-5mndv\" (UID: \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\") " pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:08.591754 kubelet[2558]: I0909 00:33:08.591698 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bff8f74-70ce-4a82-bbde-261eec1d2cf4-kubelet-dir\") pod \"csi-node-driver-5mndv\" (UID: \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\") " pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:08.595530 kubelet[2558]: I0909 00:33:08.595503 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n876q\" (UniqueName: \"kubernetes.io/projected/6bff8f74-70ce-4a82-bbde-261eec1d2cf4-kube-api-access-n876q\") pod \"csi-node-driver-5mndv\" (UID: \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\") " pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:08.602095 containerd[1459]: time="2025-09-09T00:33:08.601189343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:33:08.603323 containerd[1459]: time="2025-09-09T00:33:08.603217563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:33:08.603323 containerd[1459]: time="2025-09-09T00:33:08.603235457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:33:08.603644 containerd[1459]: time="2025-09-09T00:33:08.603543766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:33:08.643772 systemd[1]: Started cri-containerd-a4c5a7101fef321e650f6aae2b39107bbb39848c886fce3812ad1e4231b1069e.scope - libcontainer container a4c5a7101fef321e650f6aae2b39107bbb39848c886fce3812ad1e4231b1069e.
Sep 9 00:33:08.709984 containerd[1459]: time="2025-09-09T00:33:08.709934389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-854844fdf6-j2lk5,Uid:36eb4b6c-dfdd-4e4d-b51a-9893d59e4f09,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4c5a7101fef321e650f6aae2b39107bbb39848c886fce3812ad1e4231b1069e\""
Sep 9 00:33:08.715425 kubelet[2558]: E0909 00:33:08.715385 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:08.716681 containerd[1459]: time="2025-09-09T00:33:08.716631019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 00:33:08.791859 containerd[1459]: time="2025-09-09T00:33:08.791711671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dfdxp,Uid:8cec681f-68ca-430f-93e8-0fe9389fa67b,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:08.824864 containerd[1459]: time="2025-09-09T00:33:08.824739248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:33:08.825084 containerd[1459]: time="2025-09-09T00:33:08.824842573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:33:08.825084 containerd[1459]: time="2025-09-09T00:33:08.824858192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:33:08.825084 containerd[1459]: time="2025-09-09T00:33:08.824969711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:33:08.851833 systemd[1]: Started cri-containerd-bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4.scope - libcontainer container bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4.
Sep 9 00:33:08.881531 containerd[1459]: time="2025-09-09T00:33:08.881446191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dfdxp,Uid:8cec681f-68ca-430f-93e8-0fe9389fa67b,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\""
Sep 9 00:33:09.825836 kubelet[2558]: E0909 00:33:09.825773 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:11.218318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783885433.mount: Deactivated successfully.
Sep 9 00:33:11.609136 containerd[1459]: time="2025-09-09T00:33:11.609047283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:11.610324 containerd[1459]: time="2025-09-09T00:33:11.610183326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 9 00:33:11.611688 containerd[1459]: time="2025-09-09T00:33:11.611628711Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:11.614119 containerd[1459]: time="2025-09-09T00:33:11.614065627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:11.614745 containerd[1459]: time="2025-09-09T00:33:11.614696823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.898016401s"
Sep 9 00:33:11.614745 containerd[1459]: time="2025-09-09T00:33:11.614725476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 9 00:33:11.617204 containerd[1459]: time="2025-09-09T00:33:11.617165489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 9 00:33:11.639149 containerd[1459]: time="2025-09-09T00:33:11.639092318Z" level=info msg="CreateContainer within sandbox \"a4c5a7101fef321e650f6aae2b39107bbb39848c886fce3812ad1e4231b1069e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 9 00:33:11.658206 containerd[1459]: time="2025-09-09T00:33:11.658131453Z" level=info msg="CreateContainer within sandbox \"a4c5a7101fef321e650f6aae2b39107bbb39848c886fce3812ad1e4231b1069e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ff16c9b26d7acf34b7118a5f70d8cb158c699c4ba308f8d3a4b852f0042b1483\""
Sep 9 00:33:11.658946 containerd[1459]: time="2025-09-09T00:33:11.658829746Z" level=info msg="StartContainer for \"ff16c9b26d7acf34b7118a5f70d8cb158c699c4ba308f8d3a4b852f0042b1483\""
Sep 9 00:33:11.694753 systemd[1]: Started cri-containerd-ff16c9b26d7acf34b7118a5f70d8cb158c699c4ba308f8d3a4b852f0042b1483.scope - libcontainer container ff16c9b26d7acf34b7118a5f70d8cb158c699c4ba308f8d3a4b852f0042b1483.
Sep 9 00:33:11.747416 containerd[1459]: time="2025-09-09T00:33:11.747366088Z" level=info msg="StartContainer for \"ff16c9b26d7acf34b7118a5f70d8cb158c699c4ba308f8d3a4b852f0042b1483\" returns successfully"
Sep 9 00:33:11.841836 kubelet[2558]: E0909 00:33:11.841403 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:11.912036 kubelet[2558]: E0909 00:33:11.911864 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:11.939136 kubelet[2558]: I0909 00:33:11.938843 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-854844fdf6-j2lk5" podStartSLOduration=2.038066421 podStartE2EDuration="4.938827617s" podCreationTimestamp="2025-09-09 00:33:07 +0000 UTC" firstStartedPulling="2025-09-09 00:33:08.716225197 +0000 UTC m=+20.994350407" lastFinishedPulling="2025-09-09 00:33:11.616986393 +0000 UTC m=+23.895111603" observedRunningTime="2025-09-09 00:33:11.938624786 +0000 UTC m=+24.216749996" watchObservedRunningTime="2025-09-09 00:33:11.938827617 +0000 UTC m=+24.216952827"
Sep 9 00:33:11.963677 kubelet[2558]: E0909 00:33:11.963369 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:33:11.963677 kubelet[2558]: W0909 00:33:11.963413 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:33:11.963677 kubelet[2558]: E0909 00:33:11.963537 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:33:12.030499 kubelet[2558]: E0909 00:33:12.030441 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:33:12.030499 kubelet[2558]: W0909 00:33:12.030482 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:33:12.030499 kubelet[2558]: E0909 00:33:12.030508 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:33:12.895542 kubelet[2558]: I0909 00:33:12.895490 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 00:33:12.896012 kubelet[2558]: E0909 00:33:12.895890 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:12.979703 kubelet[2558]: E0909 00:33:12.979637 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:33:12.979703 kubelet[2558]: W0909 00:33:12.979677 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:33:12.979703 kubelet[2558]: E0909 00:33:12.979707 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:33:13.040294 kubelet[2558]: E0909 00:33:13.040218 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:33:13.040294 kubelet[2558]: W0909 00:33:13.040261 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:33:13.040294 kubelet[2558]: E0909 00:33:13.040290 2558 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:33:13.823037 kubelet[2558]: E0909 00:33:13.822925 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:13.931092 containerd[1459]: time="2025-09-09T00:33:13.930939933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:13.999039 containerd[1459]: time="2025-09-09T00:33:13.998869921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 9 00:33:14.086348 containerd[1459]: time="2025-09-09T00:33:14.086065240Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:14.176377 containerd[1459]: time="2025-09-09T00:33:14.176247966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:14.182211 containerd[1459]: time="2025-09-09T00:33:14.182087099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.564861537s"
Sep 9 00:33:14.182211 containerd[1459]: time="2025-09-09T00:33:14.182283469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 9 00:33:14.196623 containerd[1459]: time="2025-09-09T00:33:14.196483158Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 9 00:33:14.230043 containerd[1459]: time="2025-09-09T00:33:14.229832270Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63\""
Sep 9 00:33:14.234755 containerd[1459]: time="2025-09-09T00:33:14.234690752Z" level=info msg="StartContainer for \"20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63\""
Sep 9 00:33:14.292977 systemd[1]: Started cri-containerd-20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63.scope - libcontainer container 20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63.
Sep 9 00:33:14.362382 containerd[1459]: time="2025-09-09T00:33:14.362145868Z" level=info msg="StartContainer for \"20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63\" returns successfully"
Sep 9 00:33:14.387036 systemd[1]: cri-containerd-20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63.scope: Deactivated successfully.
Sep 9 00:33:14.440862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63-rootfs.mount: Deactivated successfully.
Sep 9 00:33:15.060016 containerd[1459]: time="2025-09-09T00:33:15.057037038Z" level=info msg="shim disconnected" id=20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63 namespace=k8s.io
Sep 9 00:33:15.060016 containerd[1459]: time="2025-09-09T00:33:15.060012795Z" level=warning msg="cleaning up after shim disconnected" id=20c26aa53440140e306ea0f4632c821f42662f1c547e8891654fea4b8a3c0c63 namespace=k8s.io
Sep 9 00:33:15.061024 containerd[1459]: time="2025-09-09T00:33:15.060029336Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:33:15.822927 kubelet[2558]: E0909 00:33:15.822788 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:15.907972 containerd[1459]: time="2025-09-09T00:33:15.907901353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 9 00:33:17.823202 kubelet[2558]: E0909 00:33:17.823049 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:19.740339 containerd[1459]: time="2025-09-09T00:33:19.740256245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:19.741494 containerd[1459]: time="2025-09-09T00:33:19.741440828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 9 00:33:19.743319 containerd[1459]: time="2025-09-09T00:33:19.743283286Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:19.746147 containerd[1459]: time="2025-09-09T00:33:19.746106144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:33:19.746751 containerd[1459]: time="2025-09-09T00:33:19.746709807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.838754783s"
Sep 9 00:33:19.746786 containerd[1459]: time="2025-09-09T00:33:19.746752888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 9 00:33:19.752163 containerd[1459]: time="2025-09-09T00:33:19.752130060Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 9 00:33:19.770164 containerd[1459]: time="2025-09-09T00:33:19.770073636Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf\""
Sep 9 00:33:19.770948 containerd[1459]: time="2025-09-09T00:33:19.770888936Z" level=info msg="StartContainer for \"e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf\""
Sep 9 00:33:19.817622 systemd[1]: Started cri-containerd-e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf.scope - libcontainer container e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf.
Sep 9 00:33:19.822803 kubelet[2558]: E0909 00:33:19.822751 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:20.014699 containerd[1459]: time="2025-09-09T00:33:20.014527266Z" level=info msg="StartContainer for \"e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf\" returns successfully"
Sep 9 00:33:21.482730 systemd[1]: cri-containerd-e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf.scope: Deactivated successfully.
Sep 9 00:33:21.507037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf-rootfs.mount: Deactivated successfully.
Sep 9 00:33:21.523184 kubelet[2558]: I0909 00:33:21.522838 2558 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 9 00:33:21.864130 containerd[1459]: time="2025-09-09T00:33:21.863896094Z" level=info msg="shim disconnected" id=e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf namespace=k8s.io
Sep 9 00:33:21.864130 containerd[1459]: time="2025-09-09T00:33:21.864022541Z" level=warning msg="cleaning up after shim disconnected" id=e2f30f89443250c7f0b6afcc5b5725d76e81209811b145cabfa52667a0e3ddaf namespace=k8s.io
Sep 9 00:33:21.864130 containerd[1459]: time="2025-09-09T00:33:21.864036457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 00:33:21.876390 systemd[1]: Created slice kubepods-burstable-podd385d6d4_50de_4c07_ab5d_80d9b6911449.slice - libcontainer container kubepods-burstable-podd385d6d4_50de_4c07_ab5d_80d9b6911449.slice.
Sep 9 00:33:21.887072 systemd[1]: Created slice kubepods-besteffort-pod6bff8f74_70ce_4a82_bbde_261eec1d2cf4.slice - libcontainer container kubepods-besteffort-pod6bff8f74_70ce_4a82_bbde_261eec1d2cf4.slice.
Sep 9 00:33:21.892743 containerd[1459]: time="2025-09-09T00:33:21.892551909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mndv,Uid:6bff8f74-70ce-4a82-bbde-261eec1d2cf4,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:21.895613 systemd[1]: Created slice kubepods-besteffort-podba2d26e2_dcec_4788_8eff_95fec48da778.slice - libcontainer container kubepods-besteffort-podba2d26e2_dcec_4788_8eff_95fec48da778.slice.
Sep 9 00:33:21.902842 systemd[1]: Created slice kubepods-besteffort-pod8bfccb7d_be38_4d07_befc_a95d6bd2fcf0.slice - libcontainer container kubepods-besteffort-pod8bfccb7d_be38_4d07_befc_a95d6bd2fcf0.slice.
Sep 9 00:33:21.910833 systemd[1]: Created slice kubepods-besteffort-pode8ef4708_140d_4ce0_9ba9_5283231b6fd0.slice - libcontainer container kubepods-besteffort-pode8ef4708_140d_4ce0_9ba9_5283231b6fd0.slice.
Sep 9 00:33:21.918035 systemd[1]: Created slice kubepods-burstable-podc19eda42_31fe_4d32_886d_b80d09ebda88.slice - libcontainer container kubepods-burstable-podc19eda42_31fe_4d32_886d_b80d09ebda88.slice.
Sep 9 00:33:21.923092 systemd[1]: Created slice kubepods-besteffort-podf98696c6_3dc2_424e_80af_9614520b47e8.slice - libcontainer container kubepods-besteffort-podf98696c6_3dc2_424e_80af_9614520b47e8.slice.
Sep 9 00:33:22.009261 kubelet[2558]: I0909 00:33:22.009186 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfp6h\" (UniqueName: \"kubernetes.io/projected/f98696c6-3dc2-424e-80af-9614520b47e8-kube-api-access-lfp6h\") pod \"calico-kube-controllers-f84b7d74d-mzdfw\" (UID: \"f98696c6-3dc2-424e-80af-9614520b47e8\") " pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw"
Sep 9 00:33:22.009261 kubelet[2558]: I0909 00:33:22.009239 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d385d6d4-50de-4c07-ab5d-80d9b6911449-config-volume\") pod \"coredns-674b8bbfcf-k5d4c\" (UID: \"d385d6d4-50de-4c07-ab5d-80d9b6911449\") " pod="kube-system/coredns-674b8bbfcf-k5d4c"
Sep 9 00:33:22.009261 kubelet[2558]: I0909 00:33:22.009262 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfz8z\" (UniqueName: \"kubernetes.io/projected/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-kube-api-access-sfz8z\") pod \"whisker-7fcccd97b5-2x7ph\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " pod="calico-system/whisker-7fcccd97b5-2x7ph"
Sep 9 00:33:22.009653 kubelet[2558]: I0909 00:33:22.009323 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxxs\" (UniqueName: \"kubernetes.io/projected/c19eda42-31fe-4d32-886d-b80d09ebda88-kube-api-access-6rxxs\") pod \"coredns-674b8bbfcf-dvwkz\" (UID: \"c19eda42-31fe-4d32-886d-b80d09ebda88\") " pod="kube-system/coredns-674b8bbfcf-dvwkz"
Sep 9 00:33:22.009653 kubelet[2558]: I0909 00:33:22.009421 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-ca-bundle\") pod \"whisker-7fcccd97b5-2x7ph\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " pod="calico-system/whisker-7fcccd97b5-2x7ph"
Sep 9 00:33:22.009653 kubelet[2558]: I0909 00:33:22.009452 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bfccb7d-be38-4d07-befc-a95d6bd2fcf0-calico-apiserver-certs\") pod \"calico-apiserver-54bdbb7f4b-g2fml\" (UID: \"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0\") " pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml"
Sep 9 00:33:22.009653 kubelet[2558]: I0909 00:33:22.009493 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f98696c6-3dc2-424e-80af-9614520b47e8-tigera-ca-bundle\") pod \"calico-kube-controllers-f84b7d74d-mzdfw\" (UID: \"f98696c6-3dc2-424e-80af-9614520b47e8\") " pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw"
Sep 9 00:33:22.009653 kubelet[2558]: I0909 00:33:22.009516 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ba2d26e2-dcec-4788-8eff-95fec48da778-calico-apiserver-certs\") pod \"calico-apiserver-54bdbb7f4b-958qh\" (UID: \"ba2d26e2-dcec-4788-8eff-95fec48da778\") " pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh"
Sep 9 00:33:22.009818 kubelet[2558]: I0909 00:33:22.009533 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19eda42-31fe-4d32-886d-b80d09ebda88-config-volume\") pod \"coredns-674b8bbfcf-dvwkz\" (UID: \"c19eda42-31fe-4d32-886d-b80d09ebda88\") " pod="kube-system/coredns-674b8bbfcf-dvwkz"
Sep 9 00:33:22.009818 kubelet[2558]: I0909 00:33:22.009548 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-backend-key-pair\") pod \"whisker-7fcccd97b5-2x7ph\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " pod="calico-system/whisker-7fcccd97b5-2x7ph"
Sep 9 00:33:22.009818 kubelet[2558]: I0909 00:33:22.009564 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqd6\" (UniqueName: \"kubernetes.io/projected/d385d6d4-50de-4c07-ab5d-80d9b6911449-kube-api-access-2rqd6\") pod \"coredns-674b8bbfcf-k5d4c\" (UID: \"d385d6d4-50de-4c07-ab5d-80d9b6911449\") " pod="kube-system/coredns-674b8bbfcf-k5d4c"
Sep 9 00:33:22.009818 kubelet[2558]: I0909 00:33:22.009582 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzj5g\" (UniqueName: \"kubernetes.io/projected/ba2d26e2-dcec-4788-8eff-95fec48da778-kube-api-access-xzj5g\") pod \"calico-apiserver-54bdbb7f4b-958qh\" (UID: \"ba2d26e2-dcec-4788-8eff-95fec48da778\") " pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh"
Sep 9 00:33:22.009818 kubelet[2558]: I0909 00:33:22.009604 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7fg\" (UniqueName: \"kubernetes.io/projected/8bfccb7d-be38-4d07-befc-a95d6bd2fcf0-kube-api-access-cn7fg\") pod \"calico-apiserver-54bdbb7f4b-g2fml\" (UID: \"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0\") " pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml"
Sep 9 00:33:22.068331 systemd[1]: Created slice kubepods-besteffort-poda1fa408b_fdab_450d_b028_22c70c99492b.slice - libcontainer container kubepods-besteffort-poda1fa408b_fdab_450d_b028_22c70c99492b.slice.
Sep 9 00:33:22.125025 containerd[1459]: time="2025-09-09T00:33:22.124012041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 00:33:22.183846 kubelet[2558]: E0909 00:33:22.183793 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:22.184900 containerd[1459]: time="2025-09-09T00:33:22.184841747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5d4c,Uid:d385d6d4-50de-4c07-ab5d-80d9b6911449,Namespace:kube-system,Attempt:0,}"
Sep 9 00:33:22.201937 containerd[1459]: time="2025-09-09T00:33:22.201866834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-958qh,Uid:ba2d26e2-dcec-4788-8eff-95fec48da778,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:33:22.205857 containerd[1459]: time="2025-09-09T00:33:22.205811166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-g2fml,Uid:8bfccb7d-be38-4d07-befc-a95d6bd2fcf0,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:33:22.211609 kubelet[2558]: I0909 00:33:22.211558 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fa408b-fdab-450d-b028-22c70c99492b-config\") pod \"goldmane-54d579b49d-qv7nj\" (UID: \"a1fa408b-fdab-450d-b028-22c70c99492b\") " pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:22.211609 kubelet[2558]: I0909 00:33:22.211610 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vd7\" (UniqueName: \"kubernetes.io/projected/a1fa408b-fdab-450d-b028-22c70c99492b-kube-api-access-s9vd7\") pod \"goldmane-54d579b49d-qv7nj\" (UID: \"a1fa408b-fdab-450d-b028-22c70c99492b\") " pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:22.211828 kubelet[2558]: I0909 00:33:22.211641 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1fa408b-fdab-450d-b028-22c70c99492b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-qv7nj\" (UID: \"a1fa408b-fdab-450d-b028-22c70c99492b\") " pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:22.211828 kubelet[2558]: I0909 00:33:22.211661 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a1fa408b-fdab-450d-b028-22c70c99492b-goldmane-key-pair\") pod \"goldmane-54d579b49d-qv7nj\" (UID: \"a1fa408b-fdab-450d-b028-22c70c99492b\") " pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:22.216592 containerd[1459]: time="2025-09-09T00:33:22.216528216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcccd97b5-2x7ph,Uid:e8ef4708-140d-4ce0-9ba9-5283231b6fd0,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:22.223130 kubelet[2558]: E0909 00:33:22.221229 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:22.223279 containerd[1459]: time="2025-09-09T00:33:22.222121582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvwkz,Uid:c19eda42-31fe-4d32-886d-b80d09ebda88,Namespace:kube-system,Attempt:0,}"
Sep 9 00:33:22.227364 containerd[1459]: time="2025-09-09T00:33:22.226620284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84b7d74d-mzdfw,Uid:f98696c6-3dc2-424e-80af-9614520b47e8,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:22.331957 containerd[1459]: time="2025-09-09T00:33:22.331898126Z" level=error msg="Failed to destroy network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.336227 containerd[1459]: time="2025-09-09T00:33:22.336168540Z" level=error msg="encountered an error cleaning up failed sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.336484 containerd[1459]: time="2025-09-09T00:33:22.336438095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5d4c,Uid:d385d6d4-50de-4c07-ab5d-80d9b6911449,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.336894 kubelet[2558]: E0909 00:33:22.336845 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.337071 kubelet[2558]: E0909 00:33:22.337053 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k5d4c"
Sep 9 00:33:22.337151 kubelet[2558]: E0909 00:33:22.337134 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k5d4c"
Sep 9 00:33:22.337627 kubelet[2558]: E0909 00:33:22.337254 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k5d4c_kube-system(d385d6d4-50de-4c07-ab5d-80d9b6911449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k5d4c_kube-system(d385d6d4-50de-4c07-ab5d-80d9b6911449)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k5d4c" podUID="d385d6d4-50de-4c07-ab5d-80d9b6911449"
Sep 9 00:33:22.350091 containerd[1459]: time="2025-09-09T00:33:22.349990456Z" level=error msg="Failed to destroy network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.351041 containerd[1459]: time="2025-09-09T00:33:22.350975404Z" level=error msg="encountered an error cleaning up failed sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.351110 containerd[1459]: time="2025-09-09T00:33:22.351086873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mndv,Uid:6bff8f74-70ce-4a82-bbde-261eec1d2cf4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.351561 kubelet[2558]: E0909 00:33:22.351365 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.351561 kubelet[2558]: E0909 00:33:22.351428 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:22.351561 kubelet[2558]: E0909 00:33:22.351451 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mndv"
Sep 9 00:33:22.351673 kubelet[2558]: E0909 00:33:22.351519 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5mndv_calico-system(6bff8f74-70ce-4a82-bbde-261eec1d2cf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5mndv_calico-system(6bff8f74-70ce-4a82-bbde-261eec1d2cf4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:22.360887 containerd[1459]: time="2025-09-09T00:33:22.359915981Z" level=error msg="Failed to destroy network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.360887 containerd[1459]: time="2025-09-09T00:33:22.360660017Z" level=error msg="encountered an error cleaning up failed sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.360887 containerd[1459]: time="2025-09-09T00:33:22.360727413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-g2fml,Uid:8bfccb7d-be38-4d07-befc-a95d6bd2fcf0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.361663 kubelet[2558]: E0909 00:33:22.361381 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.361663 kubelet[2558]: E0909 00:33:22.361481 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml"
Sep 9 00:33:22.361663 kubelet[2558]: E0909 00:33:22.361510 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml"
Sep 9 00:33:22.361824 kubelet[2558]: E0909 00:33:22.361574 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bdbb7f4b-g2fml_calico-apiserver(8bfccb7d-be38-4d07-befc-a95d6bd2fcf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bdbb7f4b-g2fml_calico-apiserver(8bfccb7d-be38-4d07-befc-a95d6bd2fcf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml" podUID="8bfccb7d-be38-4d07-befc-a95d6bd2fcf0"
Sep 9 00:33:22.372086 containerd[1459]: time="2025-09-09T00:33:22.371786687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qv7nj,Uid:a1fa408b-fdab-450d-b028-22c70c99492b,Namespace:calico-system,Attempt:0,}"
Sep 9 00:33:22.372728 containerd[1459]: time="2025-09-09T00:33:22.372536453Z" level=error msg="Failed to destroy network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.373145 containerd[1459]: time="2025-09-09T00:33:22.373114568Z" level=error msg="encountered an error cleaning up failed sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.373268 containerd[1459]: time="2025-09-09T00:33:22.373241396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-958qh,Uid:ba2d26e2-dcec-4788-8eff-95fec48da778,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.373936 kubelet[2558]: E0909 00:33:22.373674 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.373936 kubelet[2558]: E0909 00:33:22.373770 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh"
Sep 9 00:33:22.373936 kubelet[2558]: E0909 00:33:22.373817 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh"
Sep 9 00:33:22.374091 kubelet[2558]: E0909 00:33:22.373892 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54bdbb7f4b-958qh_calico-apiserver(ba2d26e2-dcec-4788-8eff-95fec48da778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54bdbb7f4b-958qh_calico-apiserver(ba2d26e2-dcec-4788-8eff-95fec48da778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh" podUID="ba2d26e2-dcec-4788-8eff-95fec48da778"
Sep 9 00:33:22.377761 containerd[1459]: time="2025-09-09T00:33:22.377600916Z" level=error msg="Failed to destroy network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.378075 containerd[1459]: time="2025-09-09T00:33:22.378043878Z" level=error msg="encountered an error cleaning up failed sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.378154 containerd[1459]: time="2025-09-09T00:33:22.378104772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvwkz,Uid:c19eda42-31fe-4d32-886d-b80d09ebda88,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.378543 kubelet[2558]: E0909 00:33:22.378382 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.378627 kubelet[2558]: E0909 00:33:22.378603 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvwkz"
Sep 9 00:33:22.378683 kubelet[2558]: E0909 00:33:22.378636 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvwkz"
Sep 9 00:33:22.378899 kubelet[2558]: E0909 00:33:22.378776 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvwkz_kube-system(c19eda42-31fe-4d32-886d-b80d09ebda88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvwkz_kube-system(c19eda42-31fe-4d32-886d-b80d09ebda88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvwkz" podUID="c19eda42-31fe-4d32-886d-b80d09ebda88"
Sep 9 00:33:22.380487 containerd[1459]: time="2025-09-09T00:33:22.380440595Z" level=error msg="Failed to destroy network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.380776 containerd[1459]: time="2025-09-09T00:33:22.380747621Z" level=error msg="encountered an error cleaning up failed sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.380809 containerd[1459]: time="2025-09-09T00:33:22.380792145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fcccd97b5-2x7ph,Uid:e8ef4708-140d-4ce0-9ba9-5283231b6fd0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.380954 kubelet[2558]: E0909 00:33:22.380920 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:22.381005 kubelet[2558]: E0909 00:33:22.380953 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fcccd97b5-2x7ph"
Sep 9 00:33:22.381005 kubelet[2558]: E0909 00:33:22.380973 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fcccd97b5-2x7ph"
Sep 9 00:33:22.381074 kubelet[2558]: E0909 00:33:22.381008 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fcccd97b5-2x7ph_calico-system(e8ef4708-140d-4ce0-9ba9-5283231b6fd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fcccd97b5-2x7ph_calico-system(e8ef4708-140d-4ce0-9ba9-5283231b6fd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fcccd97b5-2x7ph" podUID="e8ef4708-140d-4ce0-9ba9-5283231b6fd0"
Sep 9 00:33:23.124960 kubelet[2558]: I0909 00:33:23.124918 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6"
Sep 9 00:33:23.125739 kubelet[2558]: I0909 00:33:23.125718 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1"
Sep 9 00:33:23.126900 kubelet[2558]: I0909 00:33:23.126878 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995"
Sep 9 00:33:23.127803 kubelet[2558]: I0909 00:33:23.127769 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a"
Sep 9 00:33:23.128075 containerd[1459]: time="2025-09-09T00:33:23.127794129Z" level=info msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\""
Sep 9 00:33:23.128075 containerd[1459]: time="2025-09-09T00:33:23.127802355Z" level=info msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\""
Sep 9 00:33:23.128438 containerd[1459]: time="2025-09-09T00:33:23.128210761Z" level=info msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\""
Sep 9 00:33:23.129063 containerd[1459]: time="2025-09-09T00:33:23.129042702Z" level=info msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\""
Sep 9 00:33:23.129887 containerd[1459]: time="2025-09-09T00:33:23.129851309Z" level=info msg="Ensure that sandbox 5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1 in task-service has been cleanup successfully"
Sep 9 00:33:23.129887 containerd[1459]: time="2025-09-09T00:33:23.129864524Z" level=info msg="Ensure that sandbox f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995 in task-service has been cleanup successfully"
Sep 9 00:33:23.130282 containerd[1459]: time="2025-09-09T00:33:23.129869874Z" level=info msg="Ensure that sandbox c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a in task-service has been cleanup successfully"
Sep 9 00:33:23.130538 containerd[1459]: time="2025-09-09T00:33:23.129870576Z" level=info msg="Ensure that sandbox 43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6 in task-service has been cleanup successfully"
Sep 9 00:33:23.132347 kubelet[2558]: I0909 00:33:23.132312 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed"
Sep 9 00:33:23.133134 containerd[1459]: time="2025-09-09T00:33:23.133102360Z" level=info msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\""
Sep 9 00:33:23.133893 containerd[1459]: time="2025-09-09T00:33:23.133863699Z" level=info msg="Ensure that sandbox 5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed in task-service has been cleanup successfully"
Sep 9 00:33:23.135002 kubelet[2558]: I0909 00:33:23.134973 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec"
Sep 9 00:33:23.136840 containerd[1459]: time="2025-09-09T00:33:23.136674974Z" level=info msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\""
Sep 9 00:33:23.136914 containerd[1459]: time="2025-09-09T00:33:23.136876301Z" level=info msg="Ensure that sandbox f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec in task-service has been cleanup successfully"
Sep 9 00:33:23.164689 containerd[1459]: time="2025-09-09T00:33:23.164618577Z" level=error msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" failed" error="failed to destroy network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.165499 kubelet[2558]: E0909 00:33:23.164971 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995"
Sep 9 00:33:23.165499 kubelet[2558]: E0909 00:33:23.165071 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995"}
Sep 9 00:33:23.165499 kubelet[2558]: E0909 00:33:23.165149 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c19eda42-31fe-4d32-886d-b80d09ebda88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.165499 kubelet[2558]: E0909 00:33:23.165191 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c19eda42-31fe-4d32-886d-b80d09ebda88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvwkz" podUID="c19eda42-31fe-4d32-886d-b80d09ebda88"
Sep 9 00:33:23.176948 containerd[1459]: time="2025-09-09T00:33:23.176887558Z" level=error msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" failed" error="failed to destroy network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.177766 kubelet[2558]: E0909 00:33:23.177638 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6"
Sep 9 00:33:23.177766 kubelet[2558]: E0909 00:33:23.177716 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6"}
Sep 9 00:33:23.177843 kubelet[2558]: E0909 00:33:23.177768 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d385d6d4-50de-4c07-ab5d-80d9b6911449\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.177843 kubelet[2558]: E0909 00:33:23.177814 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d385d6d4-50de-4c07-ab5d-80d9b6911449\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k5d4c" podUID="d385d6d4-50de-4c07-ab5d-80d9b6911449"
Sep 9 00:33:23.178660 containerd[1459]: time="2025-09-09T00:33:23.178574433Z" level=error msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" failed" error="failed to destroy network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.178817 kubelet[2558]: E0909 00:33:23.178751 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1"
Sep 9 00:33:23.178870 kubelet[2558]: E0909 00:33:23.178825 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1"}
Sep 9 00:33:23.178870 kubelet[2558]: E0909 00:33:23.178857 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.178955 kubelet[2558]: E0909 00:33:23.178883 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6bff8f74-70ce-4a82-bbde-261eec1d2cf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mndv" podUID="6bff8f74-70ce-4a82-bbde-261eec1d2cf4"
Sep 9 00:33:23.181510 containerd[1459]: time="2025-09-09T00:33:23.181355472Z" level=error msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" failed" error="failed to destroy network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.182503 kubelet[2558]: E0909 00:33:23.181650 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a"
Sep 9 00:33:23.182503 kubelet[2558]: E0909 00:33:23.181749 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a"}
Sep 9 00:33:23.182503 kubelet[2558]: E0909 00:33:23.181781 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.182503 kubelet[2558]: E0909 00:33:23.181808 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fcccd97b5-2x7ph" podUID="e8ef4708-140d-4ce0-9ba9-5283231b6fd0"
Sep 9 00:33:23.188198 containerd[1459]: time="2025-09-09T00:33:23.188144671Z" level=error msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" failed" error="failed to destroy network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.188485 kubelet[2558]: E0909 00:33:23.188419 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec"
Sep 9 00:33:23.188543 kubelet[2558]: E0909 00:33:23.188501 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec"}
Sep 9 00:33:23.188575 kubelet[2558]: E0909 00:33:23.188554 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba2d26e2-dcec-4788-8eff-95fec48da778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.188644 kubelet[2558]: E0909 00:33:23.188585 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba2d26e2-dcec-4788-8eff-95fec48da778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh" podUID="ba2d26e2-dcec-4788-8eff-95fec48da778"
Sep 9 00:33:23.197279 containerd[1459]: time="2025-09-09T00:33:23.197078414Z" level=error msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" failed" error="failed to destroy network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.197361 kubelet[2558]: E0909 00:33:23.197284 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed"
Sep 9 00:33:23.197361 kubelet[2558]: E0909 00:33:23.197355 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed"}
Sep 9 00:33:23.197480 kubelet[2558]: E0909 00:33:23.197389 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 9 00:33:23.197480 kubelet[2558]: E0909 00:33:23.197422 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml" podUID="8bfccb7d-be38-4d07-befc-a95d6bd2fcf0"
Sep 9 00:33:23.623426 containerd[1459]: time="2025-09-09T00:33:23.623369995Z" level=error msg="Failed to destroy network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.623826 containerd[1459]: time="2025-09-09T00:33:23.623802315Z" level=error msg="encountered an error cleaning up failed sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.623884 containerd[1459]: time="2025-09-09T00:33:23.623847810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qv7nj,Uid:a1fa408b-fdab-450d-b028-22c70c99492b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.624140 kubelet[2558]: E0909 00:33:23.624078 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:33:23.624235 kubelet[2558]: E0909 00:33:23.624147 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:23.624235 kubelet[2558]: E0909 00:33:23.624172 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-qv7nj"
Sep 9 00:33:23.624454 kubelet[2558]: E0909 00:33:23.624254 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-qv7nj_calico-system(a1fa408b-fdab-450d-b028-22c70c99492b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-qv7nj_calico-system(a1fa408b-fdab-450d-b028-22c70c99492b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qv7nj" podUID="a1fa408b-fdab-450d-b028-22c70c99492b"
Sep 9 00:33:23.626604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2-shm.mount: Deactivated successfully.
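[Annotation] Each failed sync climbs the same kubelet call chain (log.go:32 -> kuberuntime_sandbox.go / kuberuntime_manager.go -> pod_workers.go:1301 "Error syncing pod, skipping"), and the pod is simply requeued and retried on a later sync, which is why identical error blocks recur per pod until calico/node comes up. Schematically, as a toy Go loop (delays shortened for the demo; real kubelet backoff behavior is more involved):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func setupNetwork() error {
        // The invariant failure from the log, as a plain error value.
        return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
    }

    func main() {
        delay := 100 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            if err := setupNetwork(); err != nil {
                fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
                time.Sleep(delay)
                delay *= 2 // simple doubling for the demo
            }
        }
    }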
Sep 9 00:33:23.655149 containerd[1459]: time="2025-09-09T00:33:23.655071719Z" level=error msg="Failed to destroy network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:23.655612 containerd[1459]: time="2025-09-09T00:33:23.655572168Z" level=error msg="encountered an error cleaning up failed sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:23.655657 containerd[1459]: time="2025-09-09T00:33:23.655636028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84b7d74d-mzdfw,Uid:f98696c6-3dc2-424e-80af-9614520b47e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:23.655997 kubelet[2558]: E0909 00:33:23.655917 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:23.656150 kubelet[2558]: E0909 00:33:23.656103 2558 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw" Sep 9 00:33:23.656211 kubelet[2558]: E0909 00:33:23.656153 2558 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw" Sep 9 00:33:23.656245 kubelet[2558]: E0909 00:33:23.656220 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f84b7d74d-mzdfw_calico-system(f98696c6-3dc2-424e-80af-9614520b47e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f84b7d74d-mzdfw_calico-system(f98696c6-3dc2-424e-80af-9614520b47e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw" podUID="f98696c6-3dc2-424e-80af-9614520b47e8" Sep 9 00:33:23.658155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a-shm.mount: Deactivated successfully. Sep 9 00:33:24.147899 kubelet[2558]: I0909 00:33:24.147863 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:24.150400 containerd[1459]: time="2025-09-09T00:33:24.150348729Z" level=info msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" Sep 9 00:33:24.153216 kubelet[2558]: I0909 00:33:24.152935 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:24.155756 containerd[1459]: time="2025-09-09T00:33:24.155708957Z" level=info msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" Sep 9 00:33:24.155953 containerd[1459]: time="2025-09-09T00:33:24.155930933Z" level=info msg="Ensure that sandbox a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a in task-service has been cleanup successfully" Sep 9 00:33:24.171241 containerd[1459]: time="2025-09-09T00:33:24.171188390Z" level=info msg="Ensure that sandbox f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2 in task-service has been cleanup successfully" Sep 9 00:33:24.208388 containerd[1459]: time="2025-09-09T00:33:24.208340129Z" level=error msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" failed" error="failed to destroy network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:24.208997 kubelet[2558]: E0909 00:33:24.208833 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:24.208997 kubelet[2558]: E0909 00:33:24.208898 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a"} Sep 9 00:33:24.209185 kubelet[2558]: E0909 00:33:24.209121 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f98696c6-3dc2-424e-80af-9614520b47e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:24.209185 kubelet[2558]: E0909 00:33:24.209155 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f98696c6-3dc2-424e-80af-9614520b47e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw" podUID="f98696c6-3dc2-424e-80af-9614520b47e8" Sep 9 00:33:24.213973 containerd[1459]: time="2025-09-09T00:33:24.213888589Z" level=error msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" failed" error="failed to destroy network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:24.214262 kubelet[2558]: E0909 00:33:24.214183 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:24.214306 kubelet[2558]: E0909 00:33:24.214281 2558 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2"} Sep 9 00:33:24.214352 kubelet[2558]: E0909 00:33:24.214327 2558 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1fa408b-fdab-450d-b028-22c70c99492b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:24.214397 kubelet[2558]: E0909 00:33:24.214369 2558 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1fa408b-fdab-450d-b028-22c70c99492b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-qv7nj" podUID="a1fa408b-fdab-450d-b028-22c70c99492b" Sep 9 00:33:28.035276 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:33068.service - OpenSSH per-connection server daemon (10.0.0.1:33068). Sep 9 00:33:28.097764 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 33068 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:28.101741 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:28.107224 systemd-logind[1440]: New session 10 of user core. Sep 9 00:33:28.113798 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 9 00:33:28.267163 sshd[3813]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:28.272239 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:33068.service: Deactivated successfully. Sep 9 00:33:28.275150 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:33:28.276132 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:33:28.277282 systemd-logind[1440]: Removed session 10. Sep 9 00:33:29.580071 kubelet[2558]: I0909 00:33:29.579996 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:29.580694 kubelet[2558]: E0909 00:33:29.580412 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:30.165831 kubelet[2558]: E0909 00:33:30.165734 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:32.061261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122981024.mount: Deactivated successfully. Sep 9 00:33:33.280359 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:59832.service - OpenSSH per-connection server daemon (10.0.0.1:59832). Sep 9 00:33:34.137236 containerd[1459]: time="2025-09-09T00:33:34.137157065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.138757 containerd[1459]: time="2025-09-09T00:33:34.138694094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:33:34.140665 containerd[1459]: time="2025-09-09T00:33:34.140633057Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.145103 containerd[1459]: time="2025-09-09T00:33:34.145049947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.145885 containerd[1459]: time="2025-09-09T00:33:34.145807129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.021739813s" Sep 9 00:33:34.145885 containerd[1459]: time="2025-09-09T00:33:34.145859420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:33:34.160106 containerd[1459]: time="2025-09-09T00:33:34.160057247Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:33:34.172925 sshd[3834]: Accepted publickey for core from 10.0.0.1 port 59832 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:34.175242 sshd[3834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:34.181718 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount553326621.mount: Deactivated successfully. Sep 9 00:33:34.185159 containerd[1459]: time="2025-09-09T00:33:34.185095145Z" level=info msg="CreateContainer within sandbox \"bb3a4e4819265aef242836b183019c1344cf84fa51998e873e544f886bdc69d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cd000d139af53b0473d180ffb3e579d1fd22e65d602688582a5b070d83c652ae\"" Sep 9 00:33:34.188033 containerd[1459]: time="2025-09-09T00:33:34.186092331Z" level=info msg="StartContainer for \"cd000d139af53b0473d180ffb3e579d1fd22e65d602688582a5b070d83c652ae\"" Sep 9 00:33:34.190481 systemd-logind[1440]: New session 11 of user core. Sep 9 00:33:34.194643 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:33:34.250931 systemd[1]: Started cri-containerd-cd000d139af53b0473d180ffb3e579d1fd22e65d602688582a5b070d83c652ae.scope - libcontainer container cd000d139af53b0473d180ffb3e579d1fd22e65d602688582a5b070d83c652ae. Sep 9 00:33:34.499386 sshd[3834]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:34.503501 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:59832.service: Deactivated successfully. Sep 9 00:33:34.505848 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:33:34.506707 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:33:34.507733 systemd-logind[1440]: Removed session 11. Sep 9 00:33:34.946372 containerd[1459]: time="2025-09-09T00:33:34.945916077Z" level=info msg="StartContainer for \"cd000d139af53b0473d180ffb3e579d1fd22e65d602688582a5b070d83c652ae\" returns successfully" Sep 9 00:33:34.960169 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:33:34.961059 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 00:33:35.049234 containerd[1459]: time="2025-09-09T00:33:35.049181460Z" level=info msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\"" Sep 9 00:33:35.223769 kubelet[2558]: I0909 00:33:35.223560 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dfdxp" podStartSLOduration=2.959375456 podStartE2EDuration="28.22353487s" podCreationTimestamp="2025-09-09 00:33:07 +0000 UTC" firstStartedPulling="2025-09-09 00:33:08.882705998 +0000 UTC m=+21.160831208" lastFinishedPulling="2025-09-09 00:33:34.146865412 +0000 UTC m=+46.424990622" observedRunningTime="2025-09-09 00:33:35.223294666 +0000 UTC m=+47.501419876" watchObservedRunningTime="2025-09-09 00:33:35.22353487 +0000 UTC m=+47.501660080" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.371 [INFO][3914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.372 [INFO][3914] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" iface="eth0" netns="/var/run/netns/cni-7925b0ce-434e-9a5d-21f4-9faa8affdf27" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.372 [INFO][3914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" iface="eth0" netns="/var/run/netns/cni-7925b0ce-434e-9a5d-21f4-9faa8affdf27" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.373 [INFO][3914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" iface="eth0" netns="/var/run/netns/cni-7925b0ce-434e-9a5d-21f4-9faa8affdf27" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.373 [INFO][3914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.373 [INFO][3914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.440 [INFO][3948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.440 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.440 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.541 [WARNING][3948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.541 [INFO][3948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.543 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:35.550510 containerd[1459]: 2025-09-09 00:33:35.547 [INFO][3914] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:35.551737 containerd[1459]: time="2025-09-09T00:33:35.550618738Z" level=info msg="TearDown network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" successfully" Sep 9 00:33:35.551737 containerd[1459]: time="2025-09-09T00:33:35.550653476Z" level=info msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" returns successfully" Sep 9 00:33:35.553541 systemd[1]: run-netns-cni\x2d7925b0ce\x2d434e\x2d9a5d\x2d21f4\x2d9faa8affdf27.mount: Deactivated successfully. Sep 9 00:33:35.706784 kubelet[2558]: I0909 00:33:35.706694 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8ef4708-140d-4ce0-9ba9-5283231b6fd0" (UID: "e8ef4708-140d-4ce0-9ba9-5283231b6fd0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:33:35.707017 kubelet[2558]: I0909 00:33:35.706783 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-ca-bundle\") pod \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " Sep 9 00:33:35.707017 kubelet[2558]: I0909 00:33:35.706841 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-backend-key-pair\") pod \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " Sep 9 00:33:35.707017 kubelet[2558]: I0909 00:33:35.706891 2558 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfz8z\" (UniqueName: \"kubernetes.io/projected/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-kube-api-access-sfz8z\") pod \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\" (UID: \"e8ef4708-140d-4ce0-9ba9-5283231b6fd0\") " Sep 9 00:33:35.707017 kubelet[2558]: I0909 00:33:35.706958 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:35.710795 kubelet[2558]: I0909 00:33:35.710752 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8ef4708-140d-4ce0-9ba9-5283231b6fd0" (UID: "e8ef4708-140d-4ce0-9ba9-5283231b6fd0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:33:35.710970 kubelet[2558]: I0909 00:33:35.710876 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-kube-api-access-sfz8z" (OuterVolumeSpecName: "kube-api-access-sfz8z") pod "e8ef4708-140d-4ce0-9ba9-5283231b6fd0" (UID: "e8ef4708-140d-4ce0-9ba9-5283231b6fd0"). InnerVolumeSpecName "kube-api-access-sfz8z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:33:35.712903 systemd[1]: var-lib-kubelet-pods-e8ef4708\x2d140d\x2d4ce0\x2d9ba9\x2d5283231b6fd0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfz8z.mount: Deactivated successfully. Sep 9 00:33:35.713034 systemd[1]: var-lib-kubelet-pods-e8ef4708\x2d140d\x2d4ce0\x2d9ba9\x2d5283231b6fd0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 00:33:35.807859 kubelet[2558]: I0909 00:33:35.807690 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:35.807859 kubelet[2558]: I0909 00:33:35.807741 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sfz8z\" (UniqueName: \"kubernetes.io/projected/e8ef4708-140d-4ce0-9ba9-5283231b6fd0-kube-api-access-sfz8z\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:35.824679 containerd[1459]: time="2025-09-09T00:33:35.823855652Z" level=info msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\"" Sep 9 00:33:35.834052 systemd[1]: Removed slice kubepods-besteffort-pode8ef4708_140d_4ce0_9ba9_5283231b6fd0.slice - libcontainer container kubepods-besteffort-pode8ef4708_140d_4ce0_9ba9_5283231b6fd0.slice. Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.899 [INFO][3979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.900 [INFO][3979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" iface="eth0" netns="/var/run/netns/cni-855ba669-b513-624f-d3e1-6b287c091950" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.900 [INFO][3979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" iface="eth0" netns="/var/run/netns/cni-855ba669-b513-624f-d3e1-6b287c091950" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.901 [INFO][3979] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" iface="eth0" netns="/var/run/netns/cni-855ba669-b513-624f-d3e1-6b287c091950" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.901 [INFO][3979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.901 [INFO][3979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.931 [INFO][3989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.931 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.931 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.940 [WARNING][3989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.940 [INFO][3989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.944 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:35.951783 containerd[1459]: 2025-09-09 00:33:35.948 [INFO][3979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:35.953286 containerd[1459]: time="2025-09-09T00:33:35.953230075Z" level=info msg="TearDown network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" successfully" Sep 9 00:33:35.953286 containerd[1459]: time="2025-09-09T00:33:35.953271987Z" level=info msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" returns successfully" Sep 9 00:33:35.953885 kubelet[2558]: E0909 00:33:35.953836 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:35.955703 containerd[1459]: time="2025-09-09T00:33:35.955646949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvwkz,Uid:c19eda42-31fe-4d32-886d-b80d09ebda88,Namespace:kube-system,Attempt:1,}" Sep 9 00:33:35.956228 systemd[1]: run-netns-cni\x2d855ba669\x2db513\x2d624f\x2dd3e1\x2d6b287c091950.mount: Deactivated successfully. 
Sep 9 00:33:36.118565 systemd-networkd[1388]: cali3c850b68f2e: Link UP Sep 9 00:33:36.118847 systemd-networkd[1388]: cali3c850b68f2e: Gained carrier Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.001 [INFO][3998] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.014 [INFO][3998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0 coredns-674b8bbfcf- kube-system c19eda42-31fe-4d32-886d-b80d09ebda88 993 0 2025-09-09 00:32:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dvwkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c850b68f2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.014 [INFO][3998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.040 [INFO][4011] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" HandleID="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.040 [INFO][4011] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" HandleID="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000440180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dvwkz", "timestamp":"2025-09-09 00:33:36.040585406 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.040 [INFO][4011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.040 [INFO][4011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.041 [INFO][4011] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.048 [INFO][4011] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.086 [INFO][4011] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.091 [INFO][4011] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.093 [INFO][4011] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.095 [INFO][4011] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.096 [INFO][4011] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.097 [INFO][4011] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8 Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.101 [INFO][4011] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.106 [INFO][4011] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.106 [INFO][4011] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" host="localhost" Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.106 [INFO][4011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:36.137341 containerd[1459]: 2025-09-09 00:33:36.107 [INFO][4011] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" HandleID="k8s-pod-network.01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.138263 containerd[1459]: 2025-09-09 00:33:36.110 [INFO][3998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c19eda42-31fe-4d32-886d-b80d09ebda88", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dvwkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c850b68f2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:36.138263 containerd[1459]: 2025-09-09 00:33:36.110 [INFO][3998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.138263 containerd[1459]: 2025-09-09 00:33:36.110 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c850b68f2e ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.138263 containerd[1459]: 2025-09-09 00:33:36.119 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.138263 
containerd[1459]: 2025-09-09 00:33:36.119 [INFO][3998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c19eda42-31fe-4d32-886d-b80d09ebda88", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8", Pod:"coredns-674b8bbfcf-dvwkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c850b68f2e", MAC:"4e:24:43:59:8b:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:36.138263 containerd[1459]: 2025-09-09 00:33:36.133 [INFO][3998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvwkz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:36.204337 containerd[1459]: time="2025-09-09T00:33:36.204220078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:36.204337 containerd[1459]: time="2025-09-09T00:33:36.204287157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:36.204337 containerd[1459]: time="2025-09-09T00:33:36.204301185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:36.204704 containerd[1459]: time="2025-09-09T00:33:36.204407931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:36.243838 systemd[1]: Started cri-containerd-01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8.scope - libcontainer container 01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8. Sep 9 00:33:36.267736 systemd[1]: Created slice kubepods-besteffort-podb701252e_da97_40f7_ade2_d04596a03c3b.slice - libcontainer container kubepods-besteffort-podb701252e_da97_40f7_ade2_d04596a03c3b.slice. Sep 9 00:33:36.274714 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:36.312686 containerd[1459]: time="2025-09-09T00:33:36.312600011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvwkz,Uid:c19eda42-31fe-4d32-886d-b80d09ebda88,Namespace:kube-system,Attempt:1,} returns sandbox id \"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8\"" Sep 9 00:33:36.313687 kubelet[2558]: E0909 00:33:36.313646 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:36.411483 kubelet[2558]: I0909 00:33:36.411256 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b701252e-da97-40f7-ade2-d04596a03c3b-whisker-backend-key-pair\") pod \"whisker-679f97cfbd-8pkmh\" (UID: \"b701252e-da97-40f7-ade2-d04596a03c3b\") " pod="calico-system/whisker-679f97cfbd-8pkmh" Sep 9 00:33:36.411483 kubelet[2558]: I0909 00:33:36.411328 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b701252e-da97-40f7-ade2-d04596a03c3b-whisker-ca-bundle\") pod \"whisker-679f97cfbd-8pkmh\" (UID: \"b701252e-da97-40f7-ade2-d04596a03c3b\") " pod="calico-system/whisker-679f97cfbd-8pkmh" Sep 9 00:33:36.411483 kubelet[2558]: I0909 00:33:36.411355 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvmt\" (UniqueName: \"kubernetes.io/projected/b701252e-da97-40f7-ade2-d04596a03c3b-kube-api-access-wrvmt\") pod \"whisker-679f97cfbd-8pkmh\" (UID: \"b701252e-da97-40f7-ade2-d04596a03c3b\") " pod="calico-system/whisker-679f97cfbd-8pkmh" Sep 9 00:33:36.519314 containerd[1459]: time="2025-09-09T00:33:36.519268692Z" level=info msg="CreateContainer within sandbox \"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:33:36.730089 containerd[1459]: time="2025-09-09T00:33:36.730031617Z" level=info msg="CreateContainer within sandbox \"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35f3957950811128eb72f6f226d61c4cde578a953b491e337a9fd1e722e7d990\"" Sep 9 00:33:36.731078 containerd[1459]: time="2025-09-09T00:33:36.731014652Z" level=info msg="StartContainer for \"35f3957950811128eb72f6f226d61c4cde578a953b491e337a9fd1e722e7d990\"" Sep 9 00:33:36.778737 systemd[1]: Started cri-containerd-35f3957950811128eb72f6f226d61c4cde578a953b491e337a9fd1e722e7d990.scope - libcontainer container 35f3957950811128eb72f6f226d61c4cde578a953b491e337a9fd1e722e7d990. 
Sep 9 00:33:36.823840 containerd[1459]: time="2025-09-09T00:33:36.823594673Z" level=info msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\"" Sep 9 00:33:36.828721 containerd[1459]: time="2025-09-09T00:33:36.828671692Z" level=info msg="StartContainer for \"35f3957950811128eb72f6f226d61c4cde578a953b491e337a9fd1e722e7d990\" returns successfully" Sep 9 00:33:36.874984 containerd[1459]: time="2025-09-09T00:33:36.874916851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679f97cfbd-8pkmh,Uid:b701252e-da97-40f7-ade2-d04596a03c3b,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.917 [INFO][4228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.917 [INFO][4228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" iface="eth0" netns="/var/run/netns/cni-4b493069-2208-7c85-e4f5-5c065ca98776" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.918 [INFO][4228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" iface="eth0" netns="/var/run/netns/cni-4b493069-2208-7c85-e4f5-5c065ca98776" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.918 [INFO][4228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" iface="eth0" netns="/var/run/netns/cni-4b493069-2208-7c85-e4f5-5c065ca98776" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.918 [INFO][4228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.918 [INFO][4228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.958 [INFO][4243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.959 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.959 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.968 [WARNING][4243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.968 [INFO][4243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.970 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:36.978556 containerd[1459]: 2025-09-09 00:33:36.973 [INFO][4228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:36.979060 containerd[1459]: time="2025-09-09T00:33:36.978874037Z" level=info msg="TearDown network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" successfully" Sep 9 00:33:36.979060 containerd[1459]: time="2025-09-09T00:33:36.978917000Z" level=info msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" returns successfully" Sep 9 00:33:36.979904 containerd[1459]: time="2025-09-09T00:33:36.979815843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-g2fml,Uid:8bfccb7d-be38-4d07-befc-a95d6bd2fcf0,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:33:37.161097 systemd[1]: run-netns-cni\x2d4b493069\x2d2208\x2d7c85\x2de4f5\x2d5c065ca98776.mount: Deactivated successfully. Sep 9 00:33:37.186539 kubelet[2558]: E0909 00:33:37.186452 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:37.216508 kernel: bpftool[4312]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:33:37.248339 kubelet[2558]: I0909 00:33:37.248246 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvwkz" podStartSLOduration=44.24822644 podStartE2EDuration="44.24822644s" podCreationTimestamp="2025-09-09 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:33:37.248087813 +0000 UTC m=+49.526213023" watchObservedRunningTime="2025-09-09 00:33:37.24822644 +0000 UTC m=+49.526351640" Sep 9 00:33:37.309669 systemd-networkd[1388]: cali1569974aa7d: Link UP Sep 9 00:33:37.311372 systemd-networkd[1388]: cali1569974aa7d: Gained carrier Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.036 [INFO][4270] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.141 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--679f97cfbd--8pkmh-eth0 whisker-679f97cfbd- calico-system b701252e-da97-40f7-ade2-d04596a03c3b 1014 0 2025-09-09 00:33:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:679f97cfbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-679f97cfbd-8pkmh eth0 whisker 
[] [] [kns.calico-system ksa.calico-system.whisker] cali1569974aa7d [] [] }} ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.141 [INFO][4270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.181 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" HandleID="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Workload="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.182 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" HandleID="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Workload="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-679f97cfbd-8pkmh", "timestamp":"2025-09-09 00:33:37.181934307 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.182 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.182 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.182 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.191 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.249 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.261 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.272 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.280 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.280 [INFO][4285] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.283 [INFO][4285] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957 Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.289 [INFO][4285] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.296 [INFO][4285] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.296 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" host="localhost" Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.296 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:37.406425 containerd[1459]: 2025-09-09 00:33:37.296 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" HandleID="k8s-pod-network.e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Workload="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.306 [INFO][4270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--679f97cfbd--8pkmh-eth0", GenerateName:"whisker-679f97cfbd-", Namespace:"calico-system", SelfLink:"", UID:"b701252e-da97-40f7-ade2-d04596a03c3b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679f97cfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-679f97cfbd-8pkmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1569974aa7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.306 [INFO][4270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.306 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1569974aa7d ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.312 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.313 [INFO][4270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--679f97cfbd--8pkmh-eth0", GenerateName:"whisker-679f97cfbd-", Namespace:"calico-system", SelfLink:"", UID:"b701252e-da97-40f7-ade2-d04596a03c3b", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679f97cfbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957", Pod:"whisker-679f97cfbd-8pkmh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1569974aa7d", MAC:"6e:25:ef:f0:95:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:37.407097 containerd[1459]: 2025-09-09 00:33:37.402 [INFO][4270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957" Namespace="calico-system" Pod="whisker-679f97cfbd-8pkmh" WorkloadEndpoint="localhost-k8s-whisker--679f97cfbd--8pkmh-eth0" Sep 9 00:33:37.453611 containerd[1459]: time="2025-09-09T00:33:37.450731749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:37.453611 containerd[1459]: time="2025-09-09T00:33:37.450953777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:37.453611 containerd[1459]: time="2025-09-09T00:33:37.451040814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:37.453611 containerd[1459]: time="2025-09-09T00:33:37.451592067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:37.482674 systemd[1]: Started cri-containerd-e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957.scope - libcontainer container e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957. 
Sep 9 00:33:37.501398 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:37.516609 systemd-networkd[1388]: cali1cfef9747f2: Link UP Sep 9 00:33:37.517667 systemd-networkd[1388]: cali1cfef9747f2: Gained carrier Sep 9 00:33:37.549682 containerd[1459]: time="2025-09-09T00:33:37.549631491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679f97cfbd-8pkmh,Uid:b701252e-da97-40f7-ade2-d04596a03c3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957\"" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.336 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0 calico-apiserver-54bdbb7f4b- calico-apiserver 8bfccb7d-be38-4d07-befc-a95d6bd2fcf0 1024 0 2025-09-09 00:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bdbb7f4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bdbb7f4b-g2fml eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1cfef9747f2 [] [] }} ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.336 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.442 [INFO][4348] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" HandleID="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.442 [INFO][4348] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" HandleID="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bdbb7f4b-g2fml", "timestamp":"2025-09-09 00:33:37.442600805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.442 [INFO][4348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.442 [INFO][4348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.442 [INFO][4348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.451 [INFO][4348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.465 [INFO][4348] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.476 [INFO][4348] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.478 [INFO][4348] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.481 [INFO][4348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.481 [INFO][4348] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.484 [INFO][4348] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0 Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.490 [INFO][4348] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.503 [INFO][4348] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.503 [INFO][4348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" host="localhost" Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.503 [INFO][4348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:37.553016 containerd[1459]: 2025-09-09 00:33:37.504 [INFO][4348] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" HandleID="k8s-pod-network.7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.510 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bdbb7f4b-g2fml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cfef9747f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.511 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.511 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cfef9747f2 ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.519 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.519 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0", Pod:"calico-apiserver-54bdbb7f4b-g2fml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cfef9747f2", MAC:"72:8e:f3:89:b7:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:37.553643 containerd[1459]: 2025-09-09 00:33:37.536 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-g2fml" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:37.557042 containerd[1459]: time="2025-09-09T00:33:37.556742701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:33:37.587591 systemd-networkd[1388]: vxlan.calico: Link UP Sep 9 00:33:37.587870 systemd-networkd[1388]: vxlan.calico: Gained carrier Sep 9 00:33:37.628188 containerd[1459]: time="2025-09-09T00:33:37.627997905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:37.628329 containerd[1459]: time="2025-09-09T00:33:37.628093278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:37.628329 containerd[1459]: time="2025-09-09T00:33:37.628228118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:37.628452 containerd[1459]: time="2025-09-09T00:33:37.628406642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:37.662901 systemd[1]: Started cri-containerd-7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0.scope - libcontainer container 7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0. 
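The dataplane lines above pick stable host-side veth names such as cali1569974aa7d and cali1cfef9747f2. A common way to produce unique interface names that fit the kernel's 15-character limit is to hash the endpoint identity and truncate; the sketch below shows that idea. Whether Calico uses exactly this hash, prefix, and truncation is an assumption on my part, not something the log confirms.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameFor derives a Linux interface name (max 15 chars, IFNAMSIZ-1)
// from a workload endpoint identifier: a fixed prefix plus truncated hash.
func vethNameFor(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11] // 4 + 11 = 15 chars
}

func main() {
	for _, ep := range []string{
		"localhost-k8s-whisker--679f97cfbd--8pkmh-eth0",
		"localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0",
	} {
		// Deterministic: the same endpoint always maps to the same name,
		// so retries of the CNI ADD reuse the existing interface.
		fmt.Println(vethNameFor(ep))
	}
}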
Sep 9 00:33:37.681241 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:37.710140 containerd[1459]: time="2025-09-09T00:33:37.709995143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-g2fml,Uid:8bfccb7d-be38-4d07-befc-a95d6bd2fcf0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0\"" Sep 9 00:33:37.825158 containerd[1459]: time="2025-09-09T00:33:37.824211493Z" level=info msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\"" Sep 9 00:33:37.825158 containerd[1459]: time="2025-09-09T00:33:37.824271809Z" level=info msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\"" Sep 9 00:33:37.825158 containerd[1459]: time="2025-09-09T00:33:37.824721265Z" level=info msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\"" Sep 9 00:33:37.827489 kubelet[2558]: I0909 00:33:37.827430 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8ef4708-140d-4ce0-9ba9-5283231b6fd0" path="/var/lib/kubelet/pods/e8ef4708-140d-4ce0-9ba9-5283231b6fd0/volumes" Sep 9 00:33:37.843871 systemd-networkd[1388]: cali3c850b68f2e: Gained IPv6LL Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.008 [INFO][4535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.008 [INFO][4535] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" iface="eth0" netns="/var/run/netns/cni-18481cdb-c758-7d26-e08e-b04622d6a79e" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.009 [INFO][4535] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" iface="eth0" netns="/var/run/netns/cni-18481cdb-c758-7d26-e08e-b04622d6a79e" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.009 [INFO][4535] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" iface="eth0" netns="/var/run/netns/cni-18481cdb-c758-7d26-e08e-b04622d6a79e" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.009 [INFO][4535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.009 [INFO][4535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.036 [INFO][4586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.036 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.036 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.088 [WARNING][4586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.088 [INFO][4586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.091 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.098819 containerd[1459]: 2025-09-09 00:33:38.094 [INFO][4535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:38.099426 containerd[1459]: time="2025-09-09T00:33:38.099001110Z" level=info msg="TearDown network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" successfully" Sep 9 00:33:38.099426 containerd[1459]: time="2025-09-09T00:33:38.099038372Z" level=info msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" returns successfully" Sep 9 00:33:38.100325 containerd[1459]: time="2025-09-09T00:33:38.100272087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mndv,Uid:6bff8f74-70ce-4a82-bbde-261eec1d2cf4,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.088 [INFO][4524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.091 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" iface="eth0" netns="/var/run/netns/cni-beb6b0cb-4d5e-31c5-c17c-0a30bc734e36" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.091 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" iface="eth0" netns="/var/run/netns/cni-beb6b0cb-4d5e-31c5-c17c-0a30bc734e36" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.092 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" iface="eth0" netns="/var/run/netns/cni-beb6b0cb-4d5e-31c5-c17c-0a30bc734e36" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.092 [INFO][4524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.092 [INFO][4524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.125 [INFO][4596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.125 [INFO][4596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.126 [INFO][4596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.133 [WARNING][4596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.133 [INFO][4596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.135 [INFO][4596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.142620 containerd[1459]: 2025-09-09 00:33:38.138 [INFO][4524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:38.143217 containerd[1459]: time="2025-09-09T00:33:38.142851748Z" level=info msg="TearDown network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" successfully" Sep 9 00:33:38.143217 containerd[1459]: time="2025-09-09T00:33:38.142901914Z" level=info msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" returns successfully" Sep 9 00:33:38.143386 kubelet[2558]: E0909 00:33:38.143345 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:38.144859 containerd[1459]: time="2025-09-09T00:33:38.144824776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5d4c,Uid:d385d6d4-50de-4c07-ab5d-80d9b6911449,Namespace:kube-system,Attempt:1,}" Sep 9 00:33:38.158892 systemd[1]: run-netns-cni\x2dbeb6b0cb\x2d4d5e\x2d31c5\x2dc17c\x2d0a30bc734e36.mount: Deactivated successfully. Sep 9 00:33:38.159009 systemd[1]: run-netns-cni\x2d18481cdb\x2dc758\x2d7d26\x2de08e\x2db04622d6a79e.mount: Deactivated successfully. 
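Each StopPodSandbox above follows the same release pattern: free addresses by handle ID, and when the handle was never written (the WARNING "Asked to release address but it doesn't exist. Ignoring"), fall back to the workload ID and treat "not found" as a non-error, so the teardown still returns successfully. A sketch of that two-step release under assumed simplified state — allocator, releaseBy and teardown are hypothetical names, not Calico's code:

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("no addresses recorded for this owner")

type allocator struct {
	mu      sync.Mutex          // the host-wide IPAM lock again
	byOwner map[string][]string // owner (handle or workload ID) -> IPs
}

func (a *allocator) releaseBy(owner string) error {
	a.mu.Lock()
	defer a.mu.Unlock()
	ips, ok := a.byOwner[owner]
	if !ok {
		return errNotFound
	}
	delete(a.byOwner, owner)
	fmt.Printf("released %v (owner %s)\n", ips, owner)
	return nil
}

// teardown mirrors the logged CNI DEL path: handle ID first, workload ID
// second, downgrading "doesn't exist" to a warning instead of failing.
func teardown(a *allocator, handleID, workloadID string) {
	if err := a.releaseBy(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		if err := a.releaseBy(workloadID); errors.Is(err, errNotFound) {
			fmt.Println("nothing under the workload ID either; teardown is a no-op")
		}
	}
}

func main() {
	a := &allocator{byOwner: map[string][]string{}}
	// As in the log: this sandbox never completed IPAM, so both lookups
	// miss and the delete still succeeds.
	teardown(a, "k8s-pod-network.5fe0873675196bb1", "localhost-k8s-csi--node--driver--5mndv-eth0")
}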
Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.089 [INFO][4523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.090 [INFO][4523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" iface="eth0" netns="/var/run/netns/cni-a6565a2b-f922-8110-bde3-a6aa9c0592ed" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.090 [INFO][4523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" iface="eth0" netns="/var/run/netns/cni-a6565a2b-f922-8110-bde3-a6aa9c0592ed" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.090 [INFO][4523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" iface="eth0" netns="/var/run/netns/cni-a6565a2b-f922-8110-bde3-a6aa9c0592ed" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.090 [INFO][4523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.090 [INFO][4523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.137 [INFO][4594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.137 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.137 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.147 [WARNING][4594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.147 [INFO][4594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.150 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.161077 containerd[1459]: 2025-09-09 00:33:38.155 [INFO][4523] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:38.163113 containerd[1459]: time="2025-09-09T00:33:38.161593889Z" level=info msg="TearDown network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" successfully" Sep 9 00:33:38.163113 containerd[1459]: time="2025-09-09T00:33:38.161629036Z" level=info msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" returns successfully" Sep 9 00:33:38.163113 containerd[1459]: time="2025-09-09T00:33:38.162556071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-958qh,Uid:ba2d26e2-dcec-4788-8eff-95fec48da778,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:33:38.164253 systemd[1]: run-netns-cni\x2da6565a2b\x2df922\x2d8110\x2dbde3\x2da6aa9c0592ed.mount: Deactivated successfully. Sep 9 00:33:38.197584 kubelet[2558]: E0909 00:33:38.197535 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:38.289330 systemd-networkd[1388]: calif2a524e10bf: Link UP Sep 9 00:33:38.291089 systemd-networkd[1388]: calif2a524e10bf: Gained carrier Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.170 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5mndv-eth0 csi-node-driver- calico-system 6bff8f74-70ce-4a82-bbde-261eec1d2cf4 1050 0 2025-09-09 00:33:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5mndv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif2a524e10bf [] [] }} ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.170 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.211 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" HandleID="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.211 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" HandleID="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5mndv", "timestamp":"2025-09-09 00:33:38.211299928 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.211 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.211 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.211 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.221 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.243 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.254 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.257 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.260 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.260 [INFO][4624] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.261 [INFO][4624] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.265 [INFO][4624] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.273 [INFO][4624] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.273 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" host="localhost" Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.273 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:38.313050 containerd[1459]: 2025-09-09 00:33:38.273 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" HandleID="k8s-pod-network.49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.280 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mndv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bff8f74-70ce-4a82-bbde-261eec1d2cf4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5mndv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a524e10bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.280 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.282 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2a524e10bf ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.291 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.292 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mndv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bff8f74-70ce-4a82-bbde-261eec1d2cf4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac", Pod:"csi-node-driver-5mndv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a524e10bf", MAC:"f2:ad:ca:8e:ec:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.313922 containerd[1459]: 2025-09-09 00:33:38.306 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac" Namespace="calico-system" Pod="csi-node-driver-5mndv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:38.346031 containerd[1459]: time="2025-09-09T00:33:38.345868045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:38.346031 containerd[1459]: time="2025-09-09T00:33:38.345966164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:38.346031 containerd[1459]: time="2025-09-09T00:33:38.345984961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.346278 containerd[1459]: time="2025-09-09T00:33:38.346130971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.372892 systemd[1]: Started cri-containerd-49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac.scope - libcontainer container 49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac. 
Sep 9 00:33:38.392113 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:38.396175 systemd-networkd[1388]: calibf78bec3586: Link UP Sep 9 00:33:38.397571 systemd-networkd[1388]: calibf78bec3586: Gained carrier Sep 9 00:33:38.411180 containerd[1459]: time="2025-09-09T00:33:38.411128889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mndv,Uid:6bff8f74-70ce-4a82-bbde-261eec1d2cf4,Namespace:calico-system,Attempt:1,} returns sandbox id \"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac\"" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.249 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0 coredns-674b8bbfcf- kube-system d385d6d4-50de-4c07-ab5d-80d9b6911449 1052 0 2025-09-09 00:32:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-k5d4c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf78bec3586 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.249 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.315 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" HandleID="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.315 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" HandleID="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-k5d4c", "timestamp":"2025-09-09 00:33:38.315597401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.316 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.316 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.316 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.326 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.342 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.356 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.359 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.362 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.362 [INFO][4661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.366 [INFO][4661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.372 [INFO][4661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" host="localhost" Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:38.417891 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" HandleID="k8s-pod-network.de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.419013 containerd[1459]: 2025-09-09 00:33:38.390 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d385d6d4-50de-4c07-ab5d-80d9b6911449", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-k5d4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf78bec3586", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.419013 containerd[1459]: 2025-09-09 00:33:38.391 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.419013 containerd[1459]: 2025-09-09 00:33:38.391 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf78bec3586 ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.419013 containerd[1459]: 2025-09-09 00:33:38.397 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.419013 
containerd[1459]: 2025-09-09 00:33:38.398 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d385d6d4-50de-4c07-ab5d-80d9b6911449", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c", Pod:"coredns-674b8bbfcf-k5d4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf78bec3586", MAC:"5a:0d:ba:f3:90:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.419013 containerd[1459]: 2025-09-09 00:33:38.414 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c" Namespace="kube-system" Pod="coredns-674b8bbfcf-k5d4c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:38.444762 containerd[1459]: time="2025-09-09T00:33:38.444638969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:38.444762 containerd[1459]: time="2025-09-09T00:33:38.444697663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:38.444762 containerd[1459]: time="2025-09-09T00:33:38.444711179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.445005 containerd[1459]: time="2025-09-09T00:33:38.444783678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.468765 systemd[1]: Started cri-containerd-de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c.scope - libcontainer container de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c. Sep 9 00:33:38.486249 systemd-networkd[1388]: cali702ecb2a018: Link UP Sep 9 00:33:38.487806 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:38.490744 systemd-networkd[1388]: cali702ecb2a018: Gained carrier Sep 9 00:33:38.525531 containerd[1459]: time="2025-09-09T00:33:38.521258650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k5d4c,Uid:d385d6d4-50de-4c07-ab5d-80d9b6911449,Namespace:kube-system,Attempt:1,} returns sandbox id \"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c\"" Sep 9 00:33:38.525718 kubelet[2558]: E0909 00:33:38.523099 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.262 [INFO][4645] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0 calico-apiserver-54bdbb7f4b- calico-apiserver ba2d26e2-dcec-4788-8eff-95fec48da778 1051 0 2025-09-09 00:33:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54bdbb7f4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54bdbb7f4b-958qh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali702ecb2a018 [] [] }} ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.263 [INFO][4645] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.344 [INFO][4669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" HandleID="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.345 [INFO][4669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" HandleID="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bfd40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54bdbb7f4b-958qh", "timestamp":"2025-09-09 00:33:38.344597058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.345 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.384 [INFO][4669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.428 [INFO][4669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.443 [INFO][4669] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.456 [INFO][4669] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.458 [INFO][4669] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.461 [INFO][4669] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.461 [INFO][4669] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.462 [INFO][4669] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057 Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.467 [INFO][4669] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.476 [INFO][4669] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.476 [INFO][4669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" host="localhost" Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.476 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
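Aside: the IPAM sequence above shows Calico confirming this host's block affinity for 192.168.88.128/26 and then claiming 192.168.88.134 from it for calico-apiserver-54bdbb7f4b-958qh. A minimal standard-library Go sketch checking the containment and block size those lines imply (numbers copied from the log, nothing else assumed):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block affinity confirmed in the log: 192.168.88.128/26 on host "localhost".
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	// Address claimed for calico-apiserver-54bdbb7f4b-958qh:
	fmt.Println(block.Contains(net.ParseIP("192.168.88.134"))) // true
}
```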
Sep 9 00:33:38.527120 containerd[1459]: 2025-09-09 00:33:38.476 [INFO][4669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" HandleID="k8s-pod-network.cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.480 [INFO][4645] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2d26e2-dcec-4788-8eff-95fec48da778", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54bdbb7f4b-958qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali702ecb2a018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.480 [INFO][4645] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.480 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali702ecb2a018 ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.487 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.488 [INFO][4645] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2d26e2-dcec-4788-8eff-95fec48da778", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057", Pod:"calico-apiserver-54bdbb7f4b-958qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali702ecb2a018", MAC:"c6:f6:03:66:7e:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.529690 containerd[1459]: 2025-09-09 00:33:38.504 [INFO][4645] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057" Namespace="calico-apiserver" Pod="calico-apiserver-54bdbb7f4b-958qh" WorkloadEndpoint="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:38.530265 containerd[1459]: time="2025-09-09T00:33:38.530220352Z" level=info msg="CreateContainer within sandbox \"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:33:38.566082 containerd[1459]: time="2025-09-09T00:33:38.565986306Z" level=info msg="CreateContainer within sandbox \"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc73b478819bad400242964c041cc0c3ac7e58e8d915ab0770fa5e1b1cb5e8fe\"" Sep 9 00:33:38.568868 containerd[1459]: time="2025-09-09T00:33:38.568757110Z" level=info msg="StartContainer for \"bc73b478819bad400242964c041cc0c3ac7e58e8d915ab0770fa5e1b1cb5e8fe\"" Sep 9 00:33:38.572524 containerd[1459]: time="2025-09-09T00:33:38.570813118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:38.572524 containerd[1459]: time="2025-09-09T00:33:38.570927669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:38.572524 containerd[1459]: time="2025-09-09T00:33:38.570947938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.572852 containerd[1459]: time="2025-09-09T00:33:38.572784884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:38.604694 systemd[1]: Started cri-containerd-cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057.scope - libcontainer container cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057. Sep 9 00:33:38.620781 systemd[1]: Started cri-containerd-bc73b478819bad400242964c041cc0c3ac7e58e8d915ab0770fa5e1b1cb5e8fe.scope - libcontainer container bc73b478819bad400242964c041cc0c3ac7e58e8d915ab0770fa5e1b1cb5e8fe. Sep 9 00:33:38.627077 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:38.662070 containerd[1459]: time="2025-09-09T00:33:38.661863730Z" level=info msg="StartContainer for \"bc73b478819bad400242964c041cc0c3ac7e58e8d915ab0770fa5e1b1cb5e8fe\" returns successfully" Sep 9 00:33:38.667839 containerd[1459]: time="2025-09-09T00:33:38.667793355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54bdbb7f4b-958qh,Uid:ba2d26e2-dcec-4788-8eff-95fec48da778,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057\"" Sep 9 00:33:38.823574 containerd[1459]: time="2025-09-09T00:33:38.823489016Z" level=info msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.884 [INFO][4880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.884 [INFO][4880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" iface="eth0" netns="/var/run/netns/cni-7d811a07-93ed-2817-db51-1e1cbcba1848" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.884 [INFO][4880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" iface="eth0" netns="/var/run/netns/cni-7d811a07-93ed-2817-db51-1e1cbcba1848" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.884 [INFO][4880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" iface="eth0" netns="/var/run/netns/cni-7d811a07-93ed-2817-db51-1e1cbcba1848" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.884 [INFO][4880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.885 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.911 [INFO][4888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.911 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.911 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.919 [WARNING][4888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.920 [INFO][4888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.923 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.930141 containerd[1459]: 2025-09-09 00:33:38.926 [INFO][4880] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:38.931188 containerd[1459]: time="2025-09-09T00:33:38.930194985Z" level=info msg="TearDown network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" successfully" Sep 9 00:33:38.931188 containerd[1459]: time="2025-09-09T00:33:38.930231446Z" level=info msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" returns successfully" Sep 9 00:33:38.931351 containerd[1459]: time="2025-09-09T00:33:38.931306375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qv7nj,Uid:a1fa408b-fdab-450d-b028-22c70c99492b,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:38.994742 systemd-networkd[1388]: cali1569974aa7d: Gained IPv6LL Sep 9 00:33:39.077611 systemd-networkd[1388]: calib32cf7c3d2c: Link UP Sep 9 00:33:39.078063 systemd-networkd[1388]: calib32cf7c3d2c: Gained carrier Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:38.992 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--qv7nj-eth0 goldmane-54d579b49d- calico-system a1fa408b-fdab-450d-b028-22c70c99492b 1081 0 2025-09-09 00:33:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-qv7nj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib32cf7c3d2c [] [] }} ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:38.992 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.024 [INFO][4910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" HandleID="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.024 [INFO][4910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" HandleID="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-qv7nj", "timestamp":"2025-09-09 00:33:39.024633117 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.024 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.025 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.025 [INFO][4910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.034 [INFO][4910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.041 [INFO][4910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.047 [INFO][4910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.049 [INFO][4910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.051 [INFO][4910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.051 [INFO][4910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.053 [INFO][4910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.058 [INFO][4910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.066 [INFO][4910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.066 [INFO][4910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" host="localhost" Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.066 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
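Aside: each assignment above is bracketed by "About to acquire host-wide IPAM lock." / "Acquired" / "Released" — concurrent CNI ADDs (requests [4669] and [4910] here) serialize on one lock, which is why .134 and .135 come out distinct. A toy allocator making that visible (this is an illustration of the locking pattern only, not Calico's implementation; the starting ordinal is chosen to match this log):

```go
package main

import (
	"fmt"
	"sync"
)

// Toy allocator: concurrent sandbox creations serialize on a single
// host-wide lock, so each claims the next free ordinal in the
// 192.168.88.128/26 block without racing.
type allocator struct {
	mu   sync.Mutex // the "host-wide IPAM lock"
	next int        // next free ordinal within the block
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d", 128+a.next)
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 6} // .129-.133 already in use earlier in this boot
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); fmt.Println(a.assign()) }()
	}
	wg.Wait() // prints .134, .135, .136 in some order, each exactly once
}
```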
Sep 9 00:33:39.099606 containerd[1459]: 2025-09-09 00:33:39.066 [INFO][4910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" HandleID="k8s-pod-network.14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.071 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qv7nj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a1fa408b-fdab-450d-b028-22c70c99492b", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-qv7nj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib32cf7c3d2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.071 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.071 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib32cf7c3d2c ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.075 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.076 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qv7nj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a1fa408b-fdab-450d-b028-22c70c99492b", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f", Pod:"goldmane-54d579b49d-qv7nj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib32cf7c3d2c", MAC:"86:4c:6c:c3:b7:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:39.100575 containerd[1459]: 2025-09-09 00:33:39.095 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f" Namespace="calico-system" Pod="goldmane-54d579b49d-qv7nj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:39.140985 containerd[1459]: time="2025-09-09T00:33:39.140182694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:39.140985 containerd[1459]: time="2025-09-09T00:33:39.140542485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:39.140985 containerd[1459]: time="2025-09-09T00:33:39.140610407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:39.142010 containerd[1459]: time="2025-09-09T00:33:39.141913002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:39.164302 systemd[1]: run-netns-cni\x2d7d811a07\x2d93ed\x2d2817\x2ddb51\x2d1e1cbcba1848.mount: Deactivated successfully. Sep 9 00:33:39.188717 systemd[1]: Started cri-containerd-14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f.scope - libcontainer container 14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f. 
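Aside: two systemd details visible above. Each started container shows up as a transient scope unit ("Started cri-containerd-&lt;id&gt;.scope"), and the netns bind mount shows up as a mount unit whose name is the systemd-escaped path: "/" becomes "-" and a literal "-" becomes "\x2d". Decoding the unit from the "run-netns-cni\x2d7d811a07..." line recovers the netns path logged by the CNI plugin earlier (/var/run/netns is a symlink to /run/netns):

```go
package main

import (
	"fmt"
	"strings"
)

// unescapeUnitPath reverses systemd's path escaping for unit names:
// "/" was encoded as "-" and a literal "-" as "\x2d".
func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	const hold = "\x00" // protect literal dashes before turning separators into slashes
	name = strings.ReplaceAll(name, `\x2d`, hold)
	name = strings.ReplaceAll(name, "-", "/")
	return "/" + strings.ReplaceAll(name, hold, "-")
}

func main() {
	unit := `run-netns-cni\x2d7d811a07\x2d93ed\x2d2817\x2ddb51\x2d1e1cbcba1848.mount`
	fmt.Println(unescapeUnitPath(unit))
	// /run/netns/cni-7d811a07-93ed-2817-db51-1e1cbcba1848
}
```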
Sep 9 00:33:39.210003 kubelet[2558]: E0909 00:33:39.209613 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:39.208835 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:39.211033 kubelet[2558]: E0909 00:33:39.210989 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:39.255818 containerd[1459]: time="2025-09-09T00:33:39.255764222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-qv7nj,Uid:a1fa408b-fdab-450d-b028-22c70c99492b,Namespace:calico-system,Attempt:1,} returns sandbox id \"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f\"" Sep 9 00:33:39.466625 containerd[1459]: time="2025-09-09T00:33:39.466362628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:39.468102 containerd[1459]: time="2025-09-09T00:33:39.468000158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:33:39.469974 containerd[1459]: time="2025-09-09T00:33:39.469927406Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:39.473046 containerd[1459]: time="2025-09-09T00:33:39.472996420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:39.473685 containerd[1459]: time="2025-09-09T00:33:39.473632533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.916842843s" Sep 9 00:33:39.473685 containerd[1459]: time="2025-09-09T00:33:39.473681558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:33:39.475337 containerd[1459]: time="2025-09-09T00:33:39.475290082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:33:39.479840 containerd[1459]: time="2025-09-09T00:33:39.479800960Z" level=info msg="CreateContainer within sandbox \"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:33:39.496861 containerd[1459]: time="2025-09-09T00:33:39.496792758Z" level=info msg="CreateContainer within sandbox \"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3408a66e40f56b08ba2b793cf02875357ecbb553c1a34b6a27faddd171b21d64\"" Sep 9 00:33:39.497765 containerd[1459]: time="2025-09-09T00:33:39.497675606Z" level=info msg="StartContainer for \"3408a66e40f56b08ba2b793cf02875357ecbb553c1a34b6a27faddd171b21d64\"" Sep 9 00:33:39.506013 systemd-networkd[1388]: 
calif2a524e10bf: Gained IPv6LL Sep 9 00:33:39.506870 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Sep 9 00:33:39.511511 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:59836.service - OpenSSH per-connection server daemon (10.0.0.1:59836). Sep 9 00:33:39.543789 systemd[1]: Started cri-containerd-3408a66e40f56b08ba2b793cf02875357ecbb553c1a34b6a27faddd171b21d64.scope - libcontainer container 3408a66e40f56b08ba2b793cf02875357ecbb553c1a34b6a27faddd171b21d64. Sep 9 00:33:39.568295 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 59836 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:39.569866 sshd[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:39.571179 systemd-networkd[1388]: cali1cfef9747f2: Gained IPv6LL Sep 9 00:33:39.578584 systemd-logind[1440]: New session 12 of user core. Sep 9 00:33:39.593265 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:33:39.614509 containerd[1459]: time="2025-09-09T00:33:39.614418509Z" level=info msg="StartContainer for \"3408a66e40f56b08ba2b793cf02875357ecbb553c1a34b6a27faddd171b21d64\" returns successfully" Sep 9 00:33:39.757868 sshd[4982]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:39.762854 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:59836.service: Deactivated successfully. Sep 9 00:33:39.765731 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:33:39.767354 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:33:39.768839 systemd-logind[1440]: Removed session 12. Sep 9 00:33:39.824059 containerd[1459]: time="2025-09-09T00:33:39.823787460Z" level=info msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" Sep 9 00:33:39.953693 systemd-networkd[1388]: calibf78bec3586: Gained IPv6LL Sep 9 00:33:40.029173 kubelet[2558]: I0909 00:33:40.028442 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k5d4c" podStartSLOduration=47.028413761 podStartE2EDuration="47.028413761s" podCreationTimestamp="2025-09-09 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:33:39.232249586 +0000 UTC m=+51.510374906" watchObservedRunningTime="2025-09-09 00:33:40.028413761 +0000 UTC m=+52.306538971" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.028 [INFO][5038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.028 [INFO][5038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" iface="eth0" netns="/var/run/netns/cni-371ad5af-4c90-f5e8-29c6-4dbd5c8da800" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.029 [INFO][5038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" iface="eth0" netns="/var/run/netns/cni-371ad5af-4c90-f5e8-29c6-4dbd5c8da800" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.029 [INFO][5038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" iface="eth0" netns="/var/run/netns/cni-371ad5af-4c90-f5e8-29c6-4dbd5c8da800" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.029 [INFO][5038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.029 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.055 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.067 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.067 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.073 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.073 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.075 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.081894 containerd[1459]: 2025-09-09 00:33:40.078 [INFO][5038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:40.082643 containerd[1459]: time="2025-09-09T00:33:40.082065974Z" level=info msg="TearDown network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" successfully" Sep 9 00:33:40.082643 containerd[1459]: time="2025-09-09T00:33:40.082099848Z" level=info msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" returns successfully" Sep 9 00:33:40.083108 containerd[1459]: time="2025-09-09T00:33:40.083066536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84b7d74d-mzdfw,Uid:f98696c6-3dc2-424e-80af-9614520b47e8,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:40.154451 systemd[1]: run-netns-cni\x2d371ad5af\x2d4c90\x2df5e8\x2d29c6\x2d4dbd5c8da800.mount: Deactivated successfully. 
Sep 9 00:33:40.210331 systemd-networkd[1388]: cali702ecb2a018: Gained IPv6LL Sep 9 00:33:40.220812 kubelet[2558]: E0909 00:33:40.220748 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:40.346108 systemd-networkd[1388]: calid42824f79e1: Link UP Sep 9 00:33:40.347823 systemd-networkd[1388]: calid42824f79e1: Gained carrier Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.264 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0 calico-kube-controllers-f84b7d74d- calico-system f98696c6-3dc2-424e-80af-9614520b47e8 1101 0 2025-09-09 00:33:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f84b7d74d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f84b7d74d-mzdfw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid42824f79e1 [] [] }} ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.264 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.299 [INFO][5073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" HandleID="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.299 [INFO][5073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" HandleID="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005142d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f84b7d74d-mzdfw", "timestamp":"2025-09-09 00:33:40.299000602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.299 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.299 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.299 [INFO][5073] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.308 [INFO][5073] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.314 [INFO][5073] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.319 [INFO][5073] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.320 [INFO][5073] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.324 [INFO][5073] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.324 [INFO][5073] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.325 [INFO][5073] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74 Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.331 [INFO][5073] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.339 [INFO][5073] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.339 [INFO][5073] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" host="localhost" Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.339 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:40.361287 containerd[1459]: 2025-09-09 00:33:40.339 [INFO][5073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" HandleID="k8s-pod-network.d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.343 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0", GenerateName:"calico-kube-controllers-f84b7d74d-", Namespace:"calico-system", SelfLink:"", UID:"f98696c6-3dc2-424e-80af-9614520b47e8", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84b7d74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f84b7d74d-mzdfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid42824f79e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.343 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.343 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid42824f79e1 ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.347 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.348 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0", GenerateName:"calico-kube-controllers-f84b7d74d-", Namespace:"calico-system", SelfLink:"", UID:"f98696c6-3dc2-424e-80af-9614520b47e8", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84b7d74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74", Pod:"calico-kube-controllers-f84b7d74d-mzdfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid42824f79e1", MAC:"e6:ac:2f:68:d2:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.361845 containerd[1459]: 2025-09-09 00:33:40.357 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74" Namespace="calico-system" Pod="calico-kube-controllers-f84b7d74d-mzdfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:40.382484 containerd[1459]: time="2025-09-09T00:33:40.382305940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:40.383537 containerd[1459]: time="2025-09-09T00:33:40.383135474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:40.383537 containerd[1459]: time="2025-09-09T00:33:40.383234455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:40.383645 containerd[1459]: time="2025-09-09T00:33:40.383579568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:40.411796 systemd[1]: Started cri-containerd-d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74.scope - libcontainer container d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74. 
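Aside: the "Gained IPv6LL" events interleaved through this section report the kernel finishing duplicate-address detection for each cali* veth's link-local address. Assuming the default EUI-64 address generation mode (a stable-privacy configuration would yield a different address), the link-local address follows from the MAC recorded in the endpoint, e.g. calid42824f79e1's e6:ac:2f:68:d2:dc above:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives the EUI-64 IPv6 link-local address:
// flip the universal/local bit of the first octet and splice ff:fe
// into the middle of the MAC.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	copy(ip[8:], []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]})
	return ip
}

func main() {
	mac, _ := net.ParseMAC("e6:ac:2f:68:d2:dc") // calid42824f79e1, from the endpoint above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::e4ac:2fff:fe68:d2dc
}
```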
Sep 9 00:33:40.427370 systemd-resolved[1347]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:40.458972 containerd[1459]: time="2025-09-09T00:33:40.458928961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f84b7d74d-mzdfw,Uid:f98696c6-3dc2-424e-80af-9614520b47e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74\"" Sep 9 00:33:40.721804 systemd-networkd[1388]: calib32cf7c3d2c: Gained IPv6LL Sep 9 00:33:41.224918 kubelet[2558]: E0909 00:33:41.224876 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:42.069114 systemd-networkd[1388]: calid42824f79e1: Gained IPv6LL Sep 9 00:33:42.226867 kubelet[2558]: E0909 00:33:42.226831 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:42.738904 containerd[1459]: time="2025-09-09T00:33:42.738831740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:42.739722 containerd[1459]: time="2025-09-09T00:33:42.739653478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:33:42.741062 containerd[1459]: time="2025-09-09T00:33:42.741006746Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:42.743841 containerd[1459]: time="2025-09-09T00:33:42.743779709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:42.744354 containerd[1459]: time="2025-09-09T00:33:42.744307142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.268968627s" Sep 9 00:33:42.744409 containerd[1459]: time="2025-09-09T00:33:42.744352218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:33:42.745654 containerd[1459]: time="2025-09-09T00:33:42.745622737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:33:42.792259 containerd[1459]: time="2025-09-09T00:33:42.791634285Z" level=info msg="CreateContainer within sandbox \"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:33:42.829388 containerd[1459]: time="2025-09-09T00:33:42.829327532Z" level=info msg="CreateContainer within sandbox \"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4d2da3b8053e0ea92435b62faad0a15638bd26474097b74d13ac951a514d390e\"" Sep 9 00:33:42.830155 
containerd[1459]: time="2025-09-09T00:33:42.830105355Z" level=info msg="StartContainer for \"4d2da3b8053e0ea92435b62faad0a15638bd26474097b74d13ac951a514d390e\"" Sep 9 00:33:42.876765 systemd[1]: Started cri-containerd-4d2da3b8053e0ea92435b62faad0a15638bd26474097b74d13ac951a514d390e.scope - libcontainer container 4d2da3b8053e0ea92435b62faad0a15638bd26474097b74d13ac951a514d390e. Sep 9 00:33:43.039321 containerd[1459]: time="2025-09-09T00:33:43.039152438Z" level=info msg="StartContainer for \"4d2da3b8053e0ea92435b62faad0a15638bd26474097b74d13ac951a514d390e\" returns successfully" Sep 9 00:33:43.341627 kubelet[2558]: I0909 00:33:43.341262 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-g2fml" podStartSLOduration=33.307810089 podStartE2EDuration="38.341243507s" podCreationTimestamp="2025-09-09 00:33:05 +0000 UTC" firstStartedPulling="2025-09-09 00:33:37.711966181 +0000 UTC m=+49.990091401" lastFinishedPulling="2025-09-09 00:33:42.745399609 +0000 UTC m=+55.023524819" observedRunningTime="2025-09-09 00:33:43.341056478 +0000 UTC m=+55.619181688" watchObservedRunningTime="2025-09-09 00:33:43.341243507 +0000 UTC m=+55.619368717" Sep 9 00:33:44.232454 kubelet[2558]: I0909 00:33:44.232406 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:44.783774 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:50612.service - OpenSSH per-connection server daemon (10.0.0.1:50612). Sep 9 00:33:44.823625 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 50612 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:44.825787 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:44.830338 systemd-logind[1440]: New session 13 of user core. Sep 9 00:33:44.837661 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:33:45.052393 sshd[5192]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:45.064268 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:50612.service: Deactivated successfully. Sep 9 00:33:45.067020 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:33:45.070508 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:33:45.079266 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620). Sep 9 00:33:45.080677 systemd-logind[1440]: Removed session 13. 
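Aside: the pod_startup_latency_tracker line above can be sanity-checked from its own fields. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). Recomputing with the logged values (copied verbatim) reproduces both numbers to within rounding:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-09 00:33:05 +0000 UTC")
	running := parse("2025-09-09 00:33:43.341243507 +0000 UTC")
	pullStart := parse("2025-09-09 00:33:37.711966181 +0000 UTC")
	pullDone := parse("2025-09-09 00:33:42.745399609 +0000 UTC")

	e2e := running.Sub(created)
	fmt.Println(e2e)                           // 38.341243507s, matching podStartE2EDuration
	fmt.Println(e2e - pullDone.Sub(pullStart)) // 33.307810079s, vs the logged 33.307810089
}
```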
Sep 9 00:33:45.085774 containerd[1459]: time="2025-09-09T00:33:45.085736459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:45.086620 containerd[1459]: time="2025-09-09T00:33:45.086582140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:33:45.087829 containerd[1459]: time="2025-09-09T00:33:45.087781378Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:45.093235 containerd[1459]: time="2025-09-09T00:33:45.093153543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:45.093937 containerd[1459]: time="2025-09-09T00:33:45.093894172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.348229836s" Sep 9 00:33:45.093989 containerd[1459]: time="2025-09-09T00:33:45.093945681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:33:45.096501 containerd[1459]: time="2025-09-09T00:33:45.096325361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:33:45.100732 containerd[1459]: time="2025-09-09T00:33:45.100695125Z" level=info msg="CreateContainer within sandbox \"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:33:45.111884 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:45.113821 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:45.119940 systemd-logind[1440]: New session 14 of user core. Sep 9 00:33:45.127680 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:33:45.135260 containerd[1459]: time="2025-09-09T00:33:45.135211032Z" level=info msg="CreateContainer within sandbox \"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"32bbddc3d925f6e951f381049eb2465503531b581f3cc8b4bae86f208ec76c58\"" Sep 9 00:33:45.136153 containerd[1459]: time="2025-09-09T00:33:45.136126607Z" level=info msg="StartContainer for \"32bbddc3d925f6e951f381049eb2465503531b581f3cc8b4bae86f208ec76c58\"" Sep 9 00:33:45.172640 systemd[1]: Started cri-containerd-32bbddc3d925f6e951f381049eb2465503531b581f3cc8b4bae86f208ec76c58.scope - libcontainer container 32bbddc3d925f6e951f381049eb2465503531b581f3cc8b4bae86f208ec76c58. 
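Aside: the same five-step IPAM claim sequence has now run for every sandbox in this section (.133 coredns, .134 apiserver, .135 goldmane, .136 kube-controllers). A short scanner to pull that summary out of a journal export; the file name boot.log is just an example (e.g. from `journalctl -o short --no-pager`):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Extracts "Successfully claimed IPs" events, printing a short
// sandbox-ID prefix and the claimed address for each CNI ADD.
func main() {
	re := regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\].*handle="k8s-pod-network\.([0-9a-f]{12})`)
	f, err := os.Open("boot.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s... -> %s\n", m[2], m[1])
		}
	}
}
```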
Sep 9 00:33:45.212382 containerd[1459]: time="2025-09-09T00:33:45.212340493Z" level=info msg="StartContainer for \"32bbddc3d925f6e951f381049eb2465503531b581f3cc8b4bae86f208ec76c58\" returns successfully" Sep 9 00:33:45.301754 sshd[5211]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:45.318718 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:50620.service: Deactivated successfully. Sep 9 00:33:45.321622 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:33:45.323677 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:33:45.334094 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:50622.service - OpenSSH per-connection server daemon (10.0.0.1:50622). Sep 9 00:33:45.335152 systemd-logind[1440]: Removed session 14. Sep 9 00:33:45.369287 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 50622 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:45.371224 sshd[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:45.376251 systemd-logind[1440]: New session 15 of user core. Sep 9 00:33:45.385711 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:33:45.521920 sshd[5261]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:45.527195 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:50622.service: Deactivated successfully. Sep 9 00:33:45.529489 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:33:45.530446 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:33:45.531676 systemd-logind[1440]: Removed session 15. Sep 9 00:33:45.535830 containerd[1459]: time="2025-09-09T00:33:45.535765051Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:45.536669 containerd[1459]: time="2025-09-09T00:33:45.536619890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:33:45.539495 containerd[1459]: time="2025-09-09T00:33:45.539405809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 443.038447ms" Sep 9 00:33:45.539495 containerd[1459]: time="2025-09-09T00:33:45.539477426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:33:45.540550 containerd[1459]: time="2025-09-09T00:33:45.540513461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:33:45.544881 containerd[1459]: time="2025-09-09T00:33:45.544846045Z" level=info msg="CreateContainer within sandbox \"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:33:45.561717 containerd[1459]: time="2025-09-09T00:33:45.561653775Z" level=info msg="CreateContainer within sandbox \"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a0d7ebf03041356d6e7bfcc4bc697997aa89516fcc6e3adde44d01839973abe\"" Sep 9 00:33:45.562401 containerd[1459]: 
time="2025-09-09T00:33:45.562361141Z" level=info msg="StartContainer for \"1a0d7ebf03041356d6e7bfcc4bc697997aa89516fcc6e3adde44d01839973abe\"" Sep 9 00:33:45.600762 systemd[1]: Started cri-containerd-1a0d7ebf03041356d6e7bfcc4bc697997aa89516fcc6e3adde44d01839973abe.scope - libcontainer container 1a0d7ebf03041356d6e7bfcc4bc697997aa89516fcc6e3adde44d01839973abe. Sep 9 00:33:45.645955 containerd[1459]: time="2025-09-09T00:33:45.645903841Z" level=info msg="StartContainer for \"1a0d7ebf03041356d6e7bfcc4bc697997aa89516fcc6e3adde44d01839973abe\" returns successfully" Sep 9 00:33:47.136414 kubelet[2558]: I0909 00:33:47.136318 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54bdbb7f4b-958qh" podStartSLOduration=35.265961455 podStartE2EDuration="42.13629608s" podCreationTimestamp="2025-09-09 00:33:05 +0000 UTC" firstStartedPulling="2025-09-09 00:33:38.670034019 +0000 UTC m=+50.948159229" lastFinishedPulling="2025-09-09 00:33:45.540368644 +0000 UTC m=+57.818493854" observedRunningTime="2025-09-09 00:33:46.253972816 +0000 UTC m=+58.532098037" watchObservedRunningTime="2025-09-09 00:33:47.13629608 +0000 UTC m=+59.414421290" Sep 9 00:33:47.811922 containerd[1459]: time="2025-09-09T00:33:47.811880832Z" level=info msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\"" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.861 [WARNING][5330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mndv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bff8f74-70ce-4a82-bbde-261eec1d2cf4", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac", Pod:"csi-node-driver-5mndv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a524e10bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.862 [INFO][5330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.862 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" iface="eth0" netns="" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.862 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.862 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.896 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.896 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.896 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.903 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.903 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.905 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:47.911823 containerd[1459]: 2025-09-09 00:33:47.908 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:47.912282 containerd[1459]: time="2025-09-09T00:33:47.911875988Z" level=info msg="TearDown network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" successfully" Sep 9 00:33:47.912282 containerd[1459]: time="2025-09-09T00:33:47.911911295Z" level=info msg="StopPodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" returns successfully" Sep 9 00:33:47.926663 containerd[1459]: time="2025-09-09T00:33:47.912636935Z" level=info msg="RemovePodSandbox for \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\"" Sep 9 00:33:47.926663 containerd[1459]: time="2025-09-09T00:33:47.914963478Z" level=info msg="Forcibly stopping sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\"" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.968 [WARNING][5359] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mndv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bff8f74-70ce-4a82-bbde-261eec1d2cf4", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac", Pod:"csi-node-driver-5mndv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a524e10bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.968 [INFO][5359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.968 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" iface="eth0" netns="" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.968 [INFO][5359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.968 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:47.999 [INFO][5373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.000 [INFO][5373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.000 [INFO][5373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.006 [WARNING][5373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.006 [INFO][5373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" HandleID="k8s-pod-network.5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Workload="localhost-k8s-csi--node--driver--5mndv-eth0" Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.008 [INFO][5373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.018214 containerd[1459]: 2025-09-09 00:33:48.011 [INFO][5359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1" Sep 9 00:33:48.018214 containerd[1459]: time="2025-09-09T00:33:48.016091626Z" level=info msg="TearDown network for sandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" successfully" Sep 9 00:33:48.024639 containerd[1459]: time="2025-09-09T00:33:48.024608030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:48.024843 containerd[1459]: time="2025-09-09T00:33:48.024818303Z" level=info msg="RemovePodSandbox \"5fe0873675196bb134ca35ba1d13efffa1df17a72faa2daf7b17c8e18b02a6d1\" returns successfully" Sep 9 00:33:48.025673 containerd[1459]: time="2025-09-09T00:33:48.025635306Z" level=info msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\"" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.071 [WARNING][5396] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2d26e2-dcec-4788-8eff-95fec48da778", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057", Pod:"calico-apiserver-54bdbb7f4b-958qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali702ecb2a018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.071 [INFO][5396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.072 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" iface="eth0" netns="" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.072 [INFO][5396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.072 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.095 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.096 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.096 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.103 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.103 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.105 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.111545 containerd[1459]: 2025-09-09 00:33:48.107 [INFO][5396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.111545 containerd[1459]: time="2025-09-09T00:33:48.111431292Z" level=info msg="TearDown network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" successfully" Sep 9 00:33:48.111545 containerd[1459]: time="2025-09-09T00:33:48.111474165Z" level=info msg="StopPodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" returns successfully" Sep 9 00:33:48.112203 containerd[1459]: time="2025-09-09T00:33:48.112058382Z" level=info msg="RemovePodSandbox for \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\"" Sep 9 00:33:48.112203 containerd[1459]: time="2025-09-09T00:33:48.112106555Z" level=info msg="Forcibly stopping sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\"" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.152 [WARNING][5423] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba2d26e2-dcec-4788-8eff-95fec48da778", ResourceVersion:"1169", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfd6b6af2cabdf104de0d5bb4997919865659c3b963fb8385cd2f839f2ee7057", Pod:"calico-apiserver-54bdbb7f4b-958qh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali702ecb2a018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.153 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.153 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" iface="eth0" netns="" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.153 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.153 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.179 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.180 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.180 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.188 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.188 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" HandleID="k8s-pod-network.f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--958qh-eth0" Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.193 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.204498 containerd[1459]: 2025-09-09 00:33:48.197 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec" Sep 9 00:33:48.204498 containerd[1459]: time="2025-09-09T00:33:48.202301547Z" level=info msg="TearDown network for sandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" successfully" Sep 9 00:33:48.258727 containerd[1459]: time="2025-09-09T00:33:48.258654202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:48.258727 containerd[1459]: time="2025-09-09T00:33:48.258732482Z" level=info msg="RemovePodSandbox \"f36695deea7fd3d76aae4a9df56761125f33a1e57733cbb38625eeba200427ec\" returns successfully" Sep 9 00:33:48.259273 containerd[1459]: time="2025-09-09T00:33:48.259219594Z" level=info msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\"" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.312 [WARNING][5453] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d385d6d4-50de-4c07-ab5d-80d9b6911449", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c", Pod:"coredns-674b8bbfcf-k5d4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf78bec3586", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.312 [INFO][5453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.312 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" iface="eth0" netns="" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.312 [INFO][5453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.312 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.339 [INFO][5462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.340 [INFO][5462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.340 [INFO][5462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.347 [WARNING][5462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.347 [INFO][5462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.349 [INFO][5462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.357068 containerd[1459]: 2025-09-09 00:33:48.353 [INFO][5453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.358032 containerd[1459]: time="2025-09-09T00:33:48.357135308Z" level=info msg="TearDown network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" successfully" Sep 9 00:33:48.358032 containerd[1459]: time="2025-09-09T00:33:48.357168190Z" level=info msg="StopPodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" returns successfully" Sep 9 00:33:48.358032 containerd[1459]: time="2025-09-09T00:33:48.357820589Z" level=info msg="RemovePodSandbox for \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\"" Sep 9 00:33:48.358032 containerd[1459]: time="2025-09-09T00:33:48.357850786Z" level=info msg="Forcibly stopping sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\"" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.402 [WARNING][5480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d385d6d4-50de-4c07-ab5d-80d9b6911449", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de574c26ca2e9d1d6b0090498ae08e7e9e782570365f22a09cfb62893b60699c", Pod:"coredns-674b8bbfcf-k5d4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf78bec3586", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.402 [INFO][5480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.402 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" iface="eth0" netns="" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.402 [INFO][5480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.402 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.429 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.430 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.430 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.437 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.437 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" HandleID="k8s-pod-network.43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Workload="localhost-k8s-coredns--674b8bbfcf--k5d4c-eth0" Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.438 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.447641 containerd[1459]: 2025-09-09 00:33:48.443 [INFO][5480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6" Sep 9 00:33:48.447641 containerd[1459]: time="2025-09-09T00:33:48.446394449Z" level=info msg="TearDown network for sandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" successfully" Sep 9 00:33:48.468787 containerd[1459]: time="2025-09-09T00:33:48.468718701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:48.468961 containerd[1459]: time="2025-09-09T00:33:48.468851625Z" level=info msg="RemovePodSandbox \"43a46109e86845d8eb8eade246fdd7b8f73a9e4a0f9e04383c57ab3b4e97c2a6\" returns successfully" Sep 9 00:33:48.469446 containerd[1459]: time="2025-09-09T00:33:48.469414522Z" level=info msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.515 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0", GenerateName:"calico-kube-controllers-f84b7d74d-", Namespace:"calico-system", SelfLink:"", UID:"f98696c6-3dc2-424e-80af-9614520b47e8", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84b7d74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74", Pod:"calico-kube-controllers-f84b7d74d-mzdfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid42824f79e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.516 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.516 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" iface="eth0" netns="" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.516 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.516 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.623 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.624 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.624 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.629 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.629 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.631 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.638257 containerd[1459]: 2025-09-09 00:33:48.635 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.638820 containerd[1459]: time="2025-09-09T00:33:48.638303954Z" level=info msg="TearDown network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" successfully" Sep 9 00:33:48.638820 containerd[1459]: time="2025-09-09T00:33:48.638344782Z" level=info msg="StopPodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" returns successfully" Sep 9 00:33:48.639093 containerd[1459]: time="2025-09-09T00:33:48.639047477Z" level=info msg="RemovePodSandbox for \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" Sep 9 00:33:48.639147 containerd[1459]: time="2025-09-09T00:33:48.639104906Z" level=info msg="Forcibly stopping sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\"" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.679 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0", GenerateName:"calico-kube-controllers-f84b7d74d-", Namespace:"calico-system", SelfLink:"", UID:"f98696c6-3dc2-424e-80af-9614520b47e8", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f84b7d74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74", Pod:"calico-kube-controllers-f84b7d74d-mzdfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid42824f79e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.679 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.679 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" iface="eth0" netns="" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.679 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.679 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.705 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.705 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.705 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.712 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.712 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" HandleID="k8s-pod-network.a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Workload="localhost-k8s-calico--kube--controllers--f84b7d74d--mzdfw-eth0" Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.713 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:48.720673 containerd[1459]: 2025-09-09 00:33:48.716 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a" Sep 9 00:33:48.720656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430269142.mount: Deactivated successfully. Sep 9 00:33:48.721375 containerd[1459]: time="2025-09-09T00:33:48.720677172Z" level=info msg="TearDown network for sandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" successfully" Sep 9 00:33:48.923019 containerd[1459]: time="2025-09-09T00:33:48.922941838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:48.923019 containerd[1459]: time="2025-09-09T00:33:48.923030287Z" level=info msg="RemovePodSandbox \"a1be72d2d53b82499ff272760a3e66938824976f6885d8967d33a3365c37cc3a\" returns successfully" Sep 9 00:33:48.923675 containerd[1459]: time="2025-09-09T00:33:48.923557507Z" level=info msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\"" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.966 [WARNING][5560] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c19eda42-31fe-4d32-886d-b80d09ebda88", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8", Pod:"coredns-674b8bbfcf-dvwkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c850b68f2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.967 [INFO][5560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.967 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" iface="eth0" netns="" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.967 [INFO][5560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.967 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.999 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.999 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:48.999 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:49.007 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:49.007 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:49.009 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:49.016037 containerd[1459]: 2025-09-09 00:33:49.012 [INFO][5560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.016889 containerd[1459]: time="2025-09-09T00:33:49.016079025Z" level=info msg="TearDown network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" successfully" Sep 9 00:33:49.016889 containerd[1459]: time="2025-09-09T00:33:49.016112660Z" level=info msg="StopPodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" returns successfully" Sep 9 00:33:49.016889 containerd[1459]: time="2025-09-09T00:33:49.016735811Z" level=info msg="RemovePodSandbox for \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\"" Sep 9 00:33:49.016889 containerd[1459]: time="2025-09-09T00:33:49.016773052Z" level=info msg="Forcibly stopping sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\"" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.059 [WARNING][5590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c19eda42-31fe-4d32-886d-b80d09ebda88", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d2cf40d2010112fcaa4cf91ec3ee9eea7cd0317cb105fcb2528a4ffa537fa8", Pod:"coredns-674b8bbfcf-dvwkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c850b68f2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.061 [INFO][5590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.061 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" iface="eth0" netns="" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.061 [INFO][5590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.061 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.089 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.090 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.090 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.096 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.096 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" HandleID="k8s-pod-network.f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Workload="localhost-k8s-coredns--674b8bbfcf--dvwkz-eth0" Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.097 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:49.104704 containerd[1459]: 2025-09-09 00:33:49.100 [INFO][5590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995" Sep 9 00:33:49.105549 containerd[1459]: time="2025-09-09T00:33:49.104751647Z" level=info msg="TearDown network for sandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" successfully" Sep 9 00:33:49.141293 containerd[1459]: time="2025-09-09T00:33:49.141188282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:49.141495 containerd[1459]: time="2025-09-09T00:33:49.141314504Z" level=info msg="RemovePodSandbox \"f30401ffcbc39a8cfcb6a4cc67a618118ae21629b419bd21390b70f506656995\" returns successfully" Sep 9 00:33:49.142171 containerd[1459]: time="2025-09-09T00:33:49.142137227Z" level=info msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.182 [WARNING][5616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qv7nj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a1fa408b-fdab-450d-b028-22c70c99492b", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f", Pod:"goldmane-54d579b49d-qv7nj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib32cf7c3d2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.182 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.182 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" iface="eth0" netns="" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.182 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.182 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.212 [INFO][5626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.212 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.213 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.221 [WARNING][5626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.221 [INFO][5626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.222 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:49.229191 containerd[1459]: 2025-09-09 00:33:49.226 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.229821 containerd[1459]: time="2025-09-09T00:33:49.229211502Z" level=info msg="TearDown network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" successfully" Sep 9 00:33:49.229821 containerd[1459]: time="2025-09-09T00:33:49.229244685Z" level=info msg="StopPodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" returns successfully" Sep 9 00:33:49.230431 containerd[1459]: time="2025-09-09T00:33:49.230347574Z" level=info msg="RemovePodSandbox for \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" Sep 9 00:33:49.230431 containerd[1459]: time="2025-09-09T00:33:49.230386710Z" level=info msg="Forcibly stopping sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\"" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.277 [WARNING][5642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--qv7nj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a1fa408b-fdab-450d-b028-22c70c99492b", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f", Pod:"goldmane-54d579b49d-qv7nj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib32cf7c3d2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.280 [INFO][5642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.280 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" iface="eth0" netns="" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.280 [INFO][5642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.280 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.308 [INFO][5651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.308 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.310 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.318 [WARNING][5651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.318 [INFO][5651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" HandleID="k8s-pod-network.f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Workload="localhost-k8s-goldmane--54d579b49d--qv7nj-eth0" Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.320 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:49.327282 containerd[1459]: 2025-09-09 00:33:49.322 [INFO][5642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2" Sep 9 00:33:49.327282 containerd[1459]: time="2025-09-09T00:33:49.327176332Z" level=info msg="TearDown network for sandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" successfully" Sep 9 00:33:50.025550 containerd[1459]: time="2025-09-09T00:33:50.025450495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:50.026042 containerd[1459]: time="2025-09-09T00:33:50.025565345Z" level=info msg="RemovePodSandbox \"f48d618844f1f439e3983402e9467ea1e887df7c8ddbc350b6cba6aef3ea11b2\" returns successfully" Sep 9 00:33:50.026326 containerd[1459]: time="2025-09-09T00:33:50.026261215Z" level=info msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\"" Sep 9 00:33:50.047526 containerd[1459]: time="2025-09-09T00:33:50.047446936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:50.049652 containerd[1459]: time="2025-09-09T00:33:50.049573581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:33:50.051481 containerd[1459]: time="2025-09-09T00:33:50.051414219Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:50.054245 containerd[1459]: time="2025-09-09T00:33:50.054208819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:50.055390 containerd[1459]: time="2025-09-09T00:33:50.055250350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.514703124s" Sep 9 00:33:50.055390 containerd[1459]: time="2025-09-09T00:33:50.055287372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:33:50.056494 
containerd[1459]: time="2025-09-09T00:33:50.056438170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:33:50.062274 containerd[1459]: time="2025-09-09T00:33:50.062227566Z" level=info msg="CreateContainer within sandbox \"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:33:50.081334 containerd[1459]: time="2025-09-09T00:33:50.081281533Z" level=info msg="CreateContainer within sandbox \"14a3ccc67c0697116311f5c6032736f34f993b57bce81a210b9d38ff3014183f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"508761a29c586e19366b3df1dd898821ae138f5f7cca7808a9a0a096b8747b14\"" Sep 9 00:33:50.085027 containerd[1459]: time="2025-09-09T00:33:50.084987927Z" level=info msg="StartContainer for \"508761a29c586e19366b3df1dd898821ae138f5f7cca7808a9a0a096b8747b14\"" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.065 [WARNING][5669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0", Pod:"calico-apiserver-54bdbb7f4b-g2fml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cfef9747f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.066 [INFO][5669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.066 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" iface="eth0" netns="" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.066 [INFO][5669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.066 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.092 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.092 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.092 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.102 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.102 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.104 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:50.111041 containerd[1459]: 2025-09-09 00:33:50.107 [INFO][5669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.111041 containerd[1459]: time="2025-09-09T00:33:50.110883600Z" level=info msg="TearDown network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" successfully" Sep 9 00:33:50.111041 containerd[1459]: time="2025-09-09T00:33:50.110917185Z" level=info msg="StopPodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" returns successfully" Sep 9 00:33:50.111800 containerd[1459]: time="2025-09-09T00:33:50.111656016Z" level=info msg="RemovePodSandbox for \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\"" Sep 9 00:33:50.111800 containerd[1459]: time="2025-09-09T00:33:50.111690483Z" level=info msg="Forcibly stopping sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\"" Sep 9 00:33:50.120621 systemd[1]: Started cri-containerd-508761a29c586e19366b3df1dd898821ae138f5f7cca7808a9a0a096b8747b14.scope - libcontainer container 508761a29c586e19366b3df1dd898821ae138f5f7cca7808a9a0a096b8747b14. 
Sep 9 00:33:50.336120 containerd[1459]: time="2025-09-09T00:33:50.335767351Z" level=info msg="StartContainer for \"508761a29c586e19366b3df1dd898821ae138f5f7cca7808a9a0a096b8747b14\" returns successfully" Sep 9 00:33:50.534884 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:55352.service - OpenSSH per-connection server daemon (10.0.0.1:55352). Sep 9 00:33:50.610124 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 55352 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:50.612560 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:50.616648 systemd-logind[1440]: New session 16 of user core. Sep 9 00:33:50.626655 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.623 [WARNING][5717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0", GenerateName:"calico-apiserver-54bdbb7f4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bfccb7d-be38-4d07-befc-a95d6bd2fcf0", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54bdbb7f4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ffe0ab31d01f397695846bd24913a113e2bbe92518da884d3bc672d8c1f56a0", Pod:"calico-apiserver-54bdbb7f4b-g2fml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1cfef9747f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.623 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.623 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" iface="eth0" netns="" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.623 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.623 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.645 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.646 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.646 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.700 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.700 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" HandleID="k8s-pod-network.5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Workload="localhost-k8s-calico--apiserver--54bdbb7f4b--g2fml-eth0" Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.702 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:50.707136 containerd[1459]: 2025-09-09 00:33:50.704 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed" Sep 9 00:33:50.707884 containerd[1459]: time="2025-09-09T00:33:50.707207150Z" level=info msg="TearDown network for sandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" successfully" Sep 9 00:33:51.030839 containerd[1459]: time="2025-09-09T00:33:51.030790554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:51.031296 containerd[1459]: time="2025-09-09T00:33:51.030868483Z" level=info msg="RemovePodSandbox \"5a765d8a77395b1e1b0c36ea43c06130b35e70b8eba1e92ac7e66eda43ab69ed\" returns successfully" Sep 9 00:33:51.031642 containerd[1459]: time="2025-09-09T00:33:51.031588178Z" level=info msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\"" Sep 9 00:33:51.044570 sshd[5749]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:51.048785 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:55352.service: Deactivated successfully. Sep 9 00:33:51.050952 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:33:51.051712 systemd-logind[1440]: Session 16 logged out. 
Waiting for processes to exit. Sep 9 00:33:51.052683 systemd-logind[1440]: Removed session 16. Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.065 [WARNING][5779] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" WorkloadEndpoint="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.065 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.066 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" iface="eth0" netns="" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.066 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.066 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.085 [INFO][5789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.086 [INFO][5789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.086 [INFO][5789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.091 [WARNING][5789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.091 [INFO][5789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.092 [INFO][5789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:51.097539 containerd[1459]: 2025-09-09 00:33:51.094 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.101291 containerd[1459]: time="2025-09-09T00:33:51.097573301Z" level=info msg="TearDown network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" successfully" Sep 9 00:33:51.101291 containerd[1459]: time="2025-09-09T00:33:51.097610653Z" level=info msg="StopPodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" returns successfully" Sep 9 00:33:51.101291 containerd[1459]: time="2025-09-09T00:33:51.098184168Z" level=info msg="RemovePodSandbox for \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\"" Sep 9 00:33:51.101291 containerd[1459]: time="2025-09-09T00:33:51.098213905Z" level=info msg="Forcibly stopping sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\"" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.130 [WARNING][5808] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" WorkloadEndpoint="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.130 [INFO][5808] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.130 [INFO][5808] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" iface="eth0" netns="" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.130 [INFO][5808] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.130 [INFO][5808] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.153 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.153 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.153 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.159 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.159 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" HandleID="k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Workload="localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0" Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.160 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:51.166455 containerd[1459]: 2025-09-09 00:33:51.163 [INFO][5808] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a" Sep 9 00:33:51.167044 containerd[1459]: time="2025-09-09T00:33:51.166511114Z" level=info msg="TearDown network for sandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" successfully" Sep 9 00:33:51.278719 containerd[1459]: time="2025-09-09T00:33:51.278663902Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:51.278804 containerd[1459]: time="2025-09-09T00:33:51.278726843Z" level=info msg="RemovePodSandbox \"c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a\" returns successfully" Sep 9 00:33:52.871693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088039051.mount: Deactivated successfully. 
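Annotation: each teardown trace above follows the same fixed order: the cni-plugin releases the IP address(es), the IPAM plugin takes the host-wide IPAM lock, tries to release by handle ID, falls back to releasing by workload ID when the handle has no allocation ("Asked to release address but it doesn't exist. Ignoring"), drops the lock, and only then does teardown processing complete. A minimal Go sketch of that ordering, with releaseByHandle and releaseByWorkload as hypothetical stand-ins for the datastore calls, not Calico's real API:

package main

import (
	"errors"
	"fmt"
	"sync"
)

// hostWideIPAMLock stands in for the node-wide lock behind the
// "About to acquire/Acquired/Released host-wide IPAM lock" lines.
var hostWideIPAMLock sync.Mutex

var errNotFound = errors.New("address doesn't exist")

// Hypothetical stand-ins for the datastore calls; NOT Calico's real API.
// Here the handle lookup fails, as in the WARNING lines above.
func releaseByHandle(handleID string) error     { return errNotFound }
func releaseByWorkload(workloadID string) error { return nil }

// releaseAddresses mirrors the order of events in the ipam_plugin.go
// trace: lock, try the handle ID, fall back to the workload ID when the
// handle has no allocation, then unlock.
func releaseAddresses(handleID, workloadID string) error {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

	if err := releaseByHandle(handleID); errors.Is(err, errNotFound) {
		// "Asked to release address but it doesn't exist. Ignoring" --
		// retry keyed by the workload instead.
		return releaseByWorkload(workloadID)
	} else if err != nil {
		return err
	}
	return nil
}

func main() {
	err := releaseAddresses(
		"k8s-pod-network.c019fa81804aac6f2d05c25d964b9ed17c3761d74217123bee3e7edb8ce4918a",
		"localhost-k8s-whisker--7fcccd97b5--2x7ph-eth0",
	)
	fmt.Println("teardown processing complete, err =", err)
}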
Sep 9 00:33:52.889382 containerd[1459]: time="2025-09-09T00:33:52.889320371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:52.890232 containerd[1459]: time="2025-09-09T00:33:52.890154384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:33:52.891391 containerd[1459]: time="2025-09-09T00:33:52.891360497Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:52.893560 containerd[1459]: time="2025-09-09T00:33:52.893534227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:52.900239 containerd[1459]: time="2025-09-09T00:33:52.900206337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.843715616s" Sep 9 00:33:52.900239 containerd[1459]: time="2025-09-09T00:33:52.900236354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:33:52.901489 containerd[1459]: time="2025-09-09T00:33:52.901182351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:33:52.905478 containerd[1459]: time="2025-09-09T00:33:52.905428769Z" level=info msg="CreateContainer within sandbox \"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:33:52.916863 containerd[1459]: time="2025-09-09T00:33:52.916805232Z" level=info msg="CreateContainer within sandbox \"e41eaeb16a3364e01df5c88d0a0d466ce33b8260e0048c47c3685a1d67126957\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bf47bc882bc2ce34e9717592b3d760fa56f569f54bbeab56de7008694767516f\"" Sep 9 00:33:52.918135 containerd[1459]: time="2025-09-09T00:33:52.917575993Z" level=info msg="StartContainer for \"bf47bc882bc2ce34e9717592b3d760fa56f569f54bbeab56de7008694767516f\"" Sep 9 00:33:52.945691 systemd[1]: Started cri-containerd-bf47bc882bc2ce34e9717592b3d760fa56f569f54bbeab56de7008694767516f.scope - libcontainer container bf47bc882bc2ce34e9717592b3d760fa56f569f54bbeab56de7008694767516f. 
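Annotation: the "Pulled image ... in 2.843715616s" figure above is consistent with the surrounding timestamps: the matching PullImage request for whisker-backend was logged at 00:33:50.056438170Z, so the reported duration places completion right where the Pulled entry appears (~00:33:52.900). A quick Go check, with both values copied from the entries above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" was
	// logged at this time, and the matching "Pulled image" entry reports
	// the pull as taking 2.843715616s (both values from the log above).
	requested, _ := time.Parse(time.RFC3339Nano, "2025-09-09T00:33:50.056438170Z")
	took := 2843715616 * time.Nanosecond

	// Implied completion: 2025-09-09T00:33:52.900153786Z, i.e. ~00:33:52.900,
	// matching the timestamp region of the "Pulled image" entry.
	fmt.Println(requested.Add(took).Format(time.RFC3339Nano))
}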
Sep 9 00:33:52.994102 containerd[1459]: time="2025-09-09T00:33:52.994027922Z" level=info msg="StartContainer for \"bf47bc882bc2ce34e9717592b3d760fa56f569f54bbeab56de7008694767516f\" returns successfully" Sep 9 00:33:53.381979 kubelet[2558]: I0909 00:33:53.381499 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-qv7nj" podStartSLOduration=35.583684915 podStartE2EDuration="46.381456473s" podCreationTimestamp="2025-09-09 00:33:07 +0000 UTC" firstStartedPulling="2025-09-09 00:33:39.258412396 +0000 UTC m=+51.536537607" lastFinishedPulling="2025-09-09 00:33:50.056183955 +0000 UTC m=+62.334309165" observedRunningTime="2025-09-09 00:33:51.410614322 +0000 UTC m=+63.688739562" watchObservedRunningTime="2025-09-09 00:33:53.381456473 +0000 UTC m=+65.659581683" Sep 9 00:33:53.381979 kubelet[2558]: I0909 00:33:53.381708 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-679f97cfbd-8pkmh" podStartSLOduration=2.032926056 podStartE2EDuration="17.381700339s" podCreationTimestamp="2025-09-09 00:33:36 +0000 UTC" firstStartedPulling="2025-09-09 00:33:37.552262841 +0000 UTC m=+49.830388051" lastFinishedPulling="2025-09-09 00:33:52.901037124 +0000 UTC m=+65.179162334" observedRunningTime="2025-09-09 00:33:53.380994232 +0000 UTC m=+65.659119442" watchObservedRunningTime="2025-09-09 00:33:53.381700339 +0000 UTC m=+65.659825549" Sep 9 00:33:55.797439 containerd[1459]: time="2025-09-09T00:33:55.797153627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:55.798210 containerd[1459]: time="2025-09-09T00:33:55.798136722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:33:55.799387 containerd[1459]: time="2025-09-09T00:33:55.799347360Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:55.801394 containerd[1459]: time="2025-09-09T00:33:55.801352694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:55.801968 containerd[1459]: time="2025-09-09T00:33:55.801931057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.900721633s" Sep 9 00:33:55.801968 containerd[1459]: time="2025-09-09T00:33:55.801966795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:33:55.803089 containerd[1459]: time="2025-09-09T00:33:55.802983764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:33:55.825959 containerd[1459]: time="2025-09-09T00:33:55.825918198Z" level=info msg="CreateContainer within sandbox \"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:33:55.842274 containerd[1459]: time="2025-09-09T00:33:55.842205295Z" level=info msg="CreateContainer within sandbox \"d4cc5ef1f757b5a2a2fece0c1077e452c87941e3ca64f547c6b525e287631b74\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d6af50b9fccb7ac0b0cc3c44fdae665d9b06989c2de9cc3eed1d79bc76a38aa7\"" Sep 9 00:33:55.842805 containerd[1459]: time="2025-09-09T00:33:55.842771815Z" level=info msg="StartContainer for \"d6af50b9fccb7ac0b0cc3c44fdae665d9b06989c2de9cc3eed1d79bc76a38aa7\"" Sep 9 00:33:55.908703 systemd[1]: Started cri-containerd-d6af50b9fccb7ac0b0cc3c44fdae665d9b06989c2de9cc3eed1d79bc76a38aa7.scope - libcontainer container d6af50b9fccb7ac0b0cc3c44fdae665d9b06989c2de9cc3eed1d79bc76a38aa7. Sep 9 00:33:55.961442 containerd[1459]: time="2025-09-09T00:33:55.961383493Z" level=info msg="StartContainer for \"d6af50b9fccb7ac0b0cc3c44fdae665d9b06989c2de9cc3eed1d79bc76a38aa7\" returns successfully" Sep 9 00:33:56.058639 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:55366.service - OpenSSH per-connection server daemon (10.0.0.1:55366). Sep 9 00:33:56.110802 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 55366 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:33:56.113276 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:56.118398 systemd-logind[1440]: New session 17 of user core. Sep 9 00:33:56.129752 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:33:56.624350 kubelet[2558]: I0909 00:33:56.623321 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f84b7d74d-mzdfw" podStartSLOduration=33.281160427 podStartE2EDuration="48.623300943s" podCreationTimestamp="2025-09-09 00:33:08 +0000 UTC" firstStartedPulling="2025-09-09 00:33:40.46070755 +0000 UTC m=+52.738832750" lastFinishedPulling="2025-09-09 00:33:55.802848055 +0000 UTC m=+68.080973266" observedRunningTime="2025-09-09 00:33:56.622633881 +0000 UTC m=+68.900759092" watchObservedRunningTime="2025-09-09 00:33:56.623300943 +0000 UTC m=+68.901426153" Sep 9 00:33:56.697983 sshd[5980]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:56.703296 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:55366.service: Deactivated successfully. Sep 9 00:33:56.706911 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:33:56.708764 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:33:56.709767 systemd-logind[1440]: Removed session 17. 
Sep 9 00:33:58.036130 containerd[1459]: time="2025-09-09T00:33:58.036073067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:58.037212 containerd[1459]: time="2025-09-09T00:33:58.037156921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:33:58.038735 containerd[1459]: time="2025-09-09T00:33:58.038701142Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:58.041588 containerd[1459]: time="2025-09-09T00:33:58.041520111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:58.042323 containerd[1459]: time="2025-09-09T00:33:58.042288825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.239277347s" Sep 9 00:33:58.042375 containerd[1459]: time="2025-09-09T00:33:58.042328049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:33:58.049167 containerd[1459]: time="2025-09-09T00:33:58.049111870Z" level=info msg="CreateContainer within sandbox \"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:33:58.071454 containerd[1459]: time="2025-09-09T00:33:58.071393857Z" level=info msg="CreateContainer within sandbox \"49c12d23ecde92780fdb8af021da2fd2520d92ff03ae56d2ff768ed48d541eac\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"530cb18a2a7224c85d336cd66fb9691f9cea8fe67af99c1861698e396141f62a\"" Sep 9 00:33:58.074670 containerd[1459]: time="2025-09-09T00:33:58.074499622Z" level=info msg="StartContainer for \"530cb18a2a7224c85d336cd66fb9691f9cea8fe67af99c1861698e396141f62a\"" Sep 9 00:33:58.132671 systemd[1]: Started cri-containerd-530cb18a2a7224c85d336cd66fb9691f9cea8fe67af99c1861698e396141f62a.scope - libcontainer container 530cb18a2a7224c85d336cd66fb9691f9cea8fe67af99c1861698e396141f62a. 
Sep 9 00:33:58.175984 containerd[1459]: time="2025-09-09T00:33:58.175910326Z" level=info msg="StartContainer for \"530cb18a2a7224c85d336cd66fb9691f9cea8fe67af99c1861698e396141f62a\" returns successfully" Sep 9 00:33:58.401318 kubelet[2558]: I0909 00:33:58.400660 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5mndv" podStartSLOduration=30.773365374 podStartE2EDuration="50.400636791s" podCreationTimestamp="2025-09-09 00:33:08 +0000 UTC" firstStartedPulling="2025-09-09 00:33:38.416134826 +0000 UTC m=+50.694260036" lastFinishedPulling="2025-09-09 00:33:58.043406243 +0000 UTC m=+70.321531453" observedRunningTime="2025-09-09 00:33:58.399853409 +0000 UTC m=+70.677978620" watchObservedRunningTime="2025-09-09 00:33:58.400636791 +0000 UTC m=+70.678762001" Sep 9 00:33:59.500313 kubelet[2558]: I0909 00:33:59.500266 2558 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:33:59.501566 kubelet[2558]: I0909 00:33:59.501548 2558 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:34:01.711606 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:52854.service - OpenSSH per-connection server daemon (10.0.0.1:52854). Sep 9 00:34:01.768549 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 52854 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:01.771363 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:01.776453 systemd-logind[1440]: New session 18 of user core. Sep 9 00:34:01.784655 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:34:01.985880 sshd[6063]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:01.991049 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:52854.service: Deactivated successfully. Sep 9 00:34:01.993922 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:34:01.995060 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:34:01.996243 systemd-logind[1440]: Removed session 18. Sep 9 00:34:02.822919 kubelet[2558]: E0909 00:34:02.822867 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:34:06.998016 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:52868.service - OpenSSH per-connection server daemon (10.0.0.1:52868). Sep 9 00:34:07.043635 sshd[6078]: Accepted publickey for core from 10.0.0.1 port 52868 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:07.045751 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:07.050704 systemd-logind[1440]: New session 19 of user core. Sep 9 00:34:07.060668 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:34:07.184318 sshd[6078]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:07.195408 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:52868.service: Deactivated successfully. Sep 9 00:34:07.198298 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:34:07.202253 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. 
Sep 9 00:34:07.208871 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:52874.service - OpenSSH per-connection server daemon (10.0.0.1:52874). Sep 9 00:34:07.220811 systemd-logind[1440]: Removed session 19. Sep 9 00:34:07.245503 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 52874 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:07.247430 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:07.252065 systemd-logind[1440]: New session 20 of user core. Sep 9 00:34:07.258730 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:34:07.730231 sshd[6100]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:07.741649 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:52874.service: Deactivated successfully. Sep 9 00:34:07.743500 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:34:07.744407 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:34:07.752788 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884). Sep 9 00:34:07.753373 systemd-logind[1440]: Removed session 20. Sep 9 00:34:07.795057 sshd[6129]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:07.797166 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:07.802342 systemd-logind[1440]: New session 21 of user core. Sep 9 00:34:07.817789 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:34:08.578375 sshd[6129]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:08.590305 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:52884.service: Deactivated successfully. Sep 9 00:34:08.592390 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:34:08.593116 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:34:08.595642 systemd-logind[1440]: Removed session 21. Sep 9 00:34:08.604492 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:52890.service - OpenSSH per-connection server daemon (10.0.0.1:52890). Sep 9 00:34:08.638557 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 52890 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:08.640387 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:08.645559 systemd-logind[1440]: New session 22 of user core. Sep 9 00:34:08.653710 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:34:09.103737 sshd[6149]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:09.114743 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:52890.service: Deactivated successfully. Sep 9 00:34:09.118437 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:34:09.125129 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:34:09.136239 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:52904.service - OpenSSH per-connection server daemon (10.0.0.1:52904). Sep 9 00:34:09.138083 systemd-logind[1440]: Removed session 22. Sep 9 00:34:09.173880 sshd[6163]: Accepted publickey for core from 10.0.0.1 port 52904 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:09.176617 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:09.191270 systemd-logind[1440]: New session 23 of user core. 
Sep 9 00:34:09.194685 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:34:09.368767 sshd[6163]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:09.374141 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:52904.service: Deactivated successfully. Sep 9 00:34:09.376812 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:34:09.377685 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:34:09.378842 systemd-logind[1440]: Removed session 23. Sep 9 00:34:12.823189 kubelet[2558]: E0909 00:34:12.823114 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:34:14.384569 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:48686.service - OpenSSH per-connection server daemon (10.0.0.1:48686). Sep 9 00:34:14.436701 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 48686 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:14.438735 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:14.443177 systemd-logind[1440]: New session 24 of user core. Sep 9 00:34:14.451603 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:34:14.687234 sshd[6178]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:14.691496 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:48686.service: Deactivated successfully. Sep 9 00:34:14.694216 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:34:14.695001 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:34:14.695989 systemd-logind[1440]: Removed session 24. Sep 9 00:34:15.823723 kubelet[2558]: E0909 00:34:15.823675 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:34:19.705807 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:48688.service - OpenSSH per-connection server daemon (10.0.0.1:48688). Sep 9 00:34:19.747500 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 48688 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:19.749447 sshd[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:19.753825 systemd-logind[1440]: New session 25 of user core. Sep 9 00:34:19.760614 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:34:19.824170 kubelet[2558]: E0909 00:34:19.822884 2558 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:34:19.917767 kubelet[2558]: I0909 00:34:19.917729 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:34:19.962692 sshd[6219]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:19.967598 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:48688.service: Deactivated successfully. Sep 9 00:34:19.969978 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:34:19.970867 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:34:19.972082 systemd-logind[1440]: Removed session 25. Sep 9 00:34:24.976184 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:58788.service - OpenSSH per-connection server daemon (10.0.0.1:58788). 
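Annotation: the SSH activity above repeats one pattern per connection: systemd starts a per-connection unit named sshd@<counter>-<local addr>:22-<peer addr>:<port>.service, pam_unix opens a session for core, systemd-logind creates session N, and the same units wind down in reverse on disconnect. A small Go parser for that unit-name pattern, inferred from the "Started sshd@..." entries here and illustrative only:

package main

import (
	"fmt"
	"strings"
)

// parseSSHDUnit splits a per-connection unit name like
// "sshd@25-10.0.0.144:22-10.0.0.1:58788.service" into its connection
// counter, local address, and peer address. The pattern is read off the
// "Started sshd@..." entries above; this parser is illustrative.
func parseSSHDUnit(unit string) (id, local, peer string, err error) {
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3)
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected unit name %q", unit)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	id, local, peer, _ := parseSSHDUnit("sshd@25-10.0.0.144:22-10.0.0.1:58788.service")
	fmt.Println(id, local, peer) // 25 10.0.0.144:22 10.0.0.1:58788
}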
Sep 9 00:34:25.016224 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 58788 ssh2: RSA SHA256:LXnX38+Ld2WFs+27NUMweVJjFK11ltvOlPZ2mP3Dpgo Sep 9 00:34:25.018304 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:34:25.022717 systemd-logind[1440]: New session 26 of user core. Sep 9 00:34:25.032601 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:34:25.161038 sshd[6265]: pam_unix(sshd:session): session closed for user core Sep 9 00:34:25.166990 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:34:25.168133 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:58788.service: Deactivated successfully. Sep 9 00:34:25.172056 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:34:25.173980 systemd-logind[1440]: Removed session 26.