Sep 12 17:34:13.889569 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:34:13.889591 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:13.889602 kernel: BIOS-provided physical RAM map:
Sep 12 17:34:13.889609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:34:13.889615 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:34:13.889621 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:34:13.889628 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 17:34:13.889635 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 17:34:13.889641 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:34:13.889649 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 17:34:13.889656 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:34:13.889662 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:34:13.889668 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:34:13.889674 kernel: NX (Execute Disable) protection: active
Sep 12 17:34:13.889682 kernel: APIC: Static calls initialized
Sep 12 17:34:13.889696 kernel: SMBIOS 2.8 present.
Sep 12 17:34:13.889703 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 17:34:13.889709 kernel: Hypervisor detected: KVM
Sep 12 17:34:13.889716 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:34:13.889723 kernel: kvm-clock: using sched offset of 3640671569 cycles
Sep 12 17:34:13.889730 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:34:13.889737 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 17:34:13.889744 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:34:13.889751 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:34:13.889758 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 17:34:13.889768 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:34:13.889775 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:34:13.889782 kernel: Using GB pages for direct mapping
Sep 12 17:34:13.889789 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:34:13.889796 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 17:34:13.889803 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889810 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889817 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889826 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 17:34:13.889833 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889840 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889847 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889854 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:34:13.889860 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 17:34:13.889867 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 17:34:13.889878 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 17:34:13.889889 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 17:34:13.889896 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 17:34:13.889903 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 17:34:13.889910 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 17:34:13.889917 kernel: No NUMA configuration found
Sep 12 17:34:13.889924 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 17:34:13.889931 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 12 17:34:13.889941 kernel: Zone ranges:
Sep 12 17:34:13.889948 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:34:13.889956 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 17:34:13.889963 kernel: Normal empty
Sep 12 17:34:13.889970 kernel: Movable zone start for each node
Sep 12 17:34:13.889977 kernel: Early memory node ranges
Sep 12 17:34:13.889984 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:34:13.889991 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 17:34:13.889998 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 17:34:13.890008 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:34:13.890015 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:34:13.890022 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 17:34:13.890029 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:34:13.890036 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:34:13.890044 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:34:13.890051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:34:13.890058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:34:13.890065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:34:13.890074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:34:13.890082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:34:13.890089 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:34:13.890096 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:34:13.890103 kernel: TSC deadline timer available
Sep 12 17:34:13.890110 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:34:13.890117 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:34:13.890124 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:34:13.890134 kernel: kvm-guest: setup PV sched yield
Sep 12 17:34:13.890143 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 17:34:13.890151 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:34:13.890158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:34:13.890165 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:34:13.890180 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:34:13.890187 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:34:13.890195 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:34:13.890202 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:34:13.890209 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:34:13.890220 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:13.890228 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:34:13.890235 kernel: random: crng init done
Sep 12 17:34:13.890242 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:34:13.890249 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:34:13.890257 kernel: Fallback order for Node 0: 0
Sep 12 17:34:13.890276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 12 17:34:13.890283 kernel: Policy zone: DMA32
Sep 12 17:34:13.890293 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:34:13.890300 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 12 17:34:13.890307 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:34:13.890315 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:34:13.890322 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:34:13.890329 kernel: Dynamic Preempt: voluntary
Sep 12 17:34:13.890336 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:34:13.890344 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:34:13.890352 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:34:13.890361 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:34:13.890369 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:34:13.890376 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:34:13.890383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:34:13.890390 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:34:13.890398 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:34:13.890405 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:34:13.890412 kernel: Console: colour VGA+ 80x25
Sep 12 17:34:13.890419 kernel: printk: console [ttyS0] enabled
Sep 12 17:34:13.890428 kernel: ACPI: Core revision 20230628
Sep 12 17:34:13.890436 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:34:13.890443 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:34:13.890450 kernel: x2apic enabled
Sep 12 17:34:13.890457 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:34:13.890464 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:34:13.890472 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:34:13.890479 kernel: kvm-guest: setup PV IPIs
Sep 12 17:34:13.890496 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:34:13.890504 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:34:13.890511 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 17:34:13.890519 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:34:13.890528 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:34:13.890536 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:34:13.890543 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:34:13.890551 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:34:13.890558 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:34:13.890568 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:34:13.890576 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:34:13.890586 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:34:13.890594 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:34:13.890601 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:34:13.890609 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:34:13.890617 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:34:13.890624 kernel: active return thunk: srso_return_thunk
Sep 12 17:34:13.890634 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:34:13.890642 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:34:13.890649 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:34:13.890657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:34:13.890665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:34:13.890672 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:34:13.890680 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:34:13.890687 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:34:13.890695 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:34:13.890704 kernel: landlock: Up and running.
Sep 12 17:34:13.890712 kernel: SELinux: Initializing.
Sep 12 17:34:13.890719 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:34:13.890727 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:34:13.890735 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:34:13.890742 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:34:13.890750 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:34:13.890757 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:34:13.890765 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:34:13.890775 kernel: ... version: 0
Sep 12 17:34:13.890783 kernel: ... bit width: 48
Sep 12 17:34:13.890790 kernel: ... generic registers: 6
Sep 12 17:34:13.890798 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:34:13.890805 kernel: ... max period: 00007fffffffffff
Sep 12 17:34:13.890812 kernel: ... fixed-purpose events: 0
Sep 12 17:34:13.890820 kernel: ... event mask: 000000000000003f
Sep 12 17:34:13.890827 kernel: signal: max sigframe size: 1776
Sep 12 17:34:13.890835 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:34:13.890845 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:34:13.890852 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:34:13.890860 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:34:13.890868 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:34:13.890875 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:34:13.890882 kernel: smpboot: Max logical packages: 1
Sep 12 17:34:13.890890 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 17:34:13.890898 kernel: devtmpfs: initialized
Sep 12 17:34:13.890905 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:34:13.890915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:34:13.890923 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:34:13.890930 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:34:13.890938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:34:13.890945 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:34:13.890953 kernel: audit: type=2000 audit(1757698453.305:1): state=initialized audit_enabled=0 res=1
Sep 12 17:34:13.890960 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:34:13.890968 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:34:13.890975 kernel: cpuidle: using governor menu
Sep 12 17:34:13.890985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:34:13.890993 kernel: dca service started, version 1.12.1
Sep 12 17:34:13.891000 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:34:13.891008 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 17:34:13.891016 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:34:13.891023 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:34:13.891031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:34:13.891038 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:34:13.891046 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:34:13.891056 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:34:13.891063 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:34:13.891071 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:34:13.891078 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:34:13.891086 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:34:13.891093 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:34:13.891101 kernel: ACPI: Interpreter enabled
Sep 12 17:34:13.891108 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:34:13.891116 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:34:13.891126 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:34:13.891134 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:34:13.891141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:34:13.891149 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:34:13.891395 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:34:13.891537 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:34:13.891663 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:34:13.891678 kernel: PCI host bridge to bus 0000:00
Sep 12 17:34:13.891817 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:34:13.891935 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:34:13.892049 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:34:13.892166 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 17:34:13.892308 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:34:13.892423 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 17:34:13.892549 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:34:13.892706 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:34:13.892853 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:34:13.892980 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 12 17:34:13.893105 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 12 17:34:13.893248 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 12 17:34:13.893426 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:34:13.893575 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:34:13.893702 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 17:34:13.893829 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 12 17:34:13.893957 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 17:34:13.894103 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:34:13.894239 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:34:13.894389 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 12 17:34:13.894521 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 17:34:13.894664 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:34:13.894792 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 12 17:34:13.894920 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 12 17:34:13.895046 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 17:34:13.895180 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 12 17:34:13.895341 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:34:13.895475 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:34:13.895616 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:34:13.895742 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 12 17:34:13.895866 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 12 17:34:13.896009 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:34:13.896136 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 17:34:13.896150 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:34:13.896158 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:34:13.896175 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:34:13.896185 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:34:13.896193 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:34:13.896203 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:34:13.896211 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:34:13.896218 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:34:13.896226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:34:13.896237 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:34:13.896244 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:34:13.896252 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:34:13.896274 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:34:13.896282 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:34:13.896289 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:34:13.896297 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:34:13.896305 kernel: iommu: Default domain type: Translated
Sep 12 17:34:13.896312 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:34:13.896323 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:34:13.896331 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:34:13.896338 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:34:13.896346 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 17:34:13.896475 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:34:13.896602 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:34:13.896727 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:34:13.896737 kernel: vgaarb: loaded
Sep 12 17:34:13.896748 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:34:13.896756 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:34:13.896764 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:34:13.896771 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:34:13.896779 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:34:13.896787 kernel: pnp: PnP ACPI init
Sep 12 17:34:13.896938 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:34:13.896949 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:34:13.896961 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:34:13.896969 kernel: NET: Registered PF_INET protocol family
Sep 12 17:34:13.896976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:34:13.896984 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:34:13.896992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:34:13.896999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:34:13.897007 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:34:13.897014 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:34:13.897022 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:34:13.897032 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:34:13.897039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:34:13.897047 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:34:13.897164 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:34:13.897317 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:34:13.897432 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:34:13.897546 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 17:34:13.897659 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:34:13.897772 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 17:34:13.897787 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:34:13.897794 kernel: Initialise system trusted keyrings
Sep 12 17:34:13.897802 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:34:13.897810 kernel: Key type asymmetric registered
Sep 12 17:34:13.897817 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:34:13.897825 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:34:13.897832 kernel: io scheduler mq-deadline registered
Sep 12 17:34:13.897840 kernel: io scheduler kyber registered
Sep 12 17:34:13.897848 kernel: io scheduler bfq registered
Sep 12 17:34:13.897858 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:34:13.897866 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:34:13.897873 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:34:13.897881 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:34:13.897889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:34:13.897896 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:34:13.897904 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:34:13.897911 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:34:13.897919 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:34:13.898067 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:34:13.898079 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:34:13.898206 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:34:13.898365 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:34:13 UTC (1757698453)
Sep 12 17:34:13.898484 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:34:13.898494 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:34:13.898502 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:34:13.898514 kernel: Segment Routing with IPv6
Sep 12 17:34:13.898522 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:34:13.898529 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:34:13.898537 kernel: Key type dns_resolver registered
Sep 12 17:34:13.898544 kernel: IPI shorthand broadcast: enabled
Sep 12 17:34:13.898552 kernel: sched_clock: Marking stable (739002835, 105651386)->(905819150, -61164929)
Sep 12 17:34:13.898559 kernel: registered taskstats version 1
Sep 12 17:34:13.898567 kernel: Loading compiled-in X.509 certificates
Sep 12 17:34:13.898574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:34:13.898585 kernel: Key type .fscrypt registered
Sep 12 17:34:13.898592 kernel: Key type fscrypt-provisioning registered
Sep 12 17:34:13.898600 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:34:13.898607 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:34:13.898614 kernel: ima: No architecture policies found
Sep 12 17:34:13.898622 kernel: clk: Disabling unused clocks
Sep 12 17:34:13.898630 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:34:13.898637 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:34:13.898645 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:34:13.898655 kernel: Run /init as init process
Sep 12 17:34:13.898662 kernel: with arguments:
Sep 12 17:34:13.898670 kernel: /init
Sep 12 17:34:13.898677 kernel: with environment:
Sep 12 17:34:13.898684 kernel: HOME=/
Sep 12 17:34:13.898692 kernel: TERM=linux
Sep 12 17:34:13.898699 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:34:13.898709 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:34:13.898721 systemd[1]: Detected virtualization kvm.
Sep 12 17:34:13.898729 systemd[1]: Detected architecture x86-64.
Sep 12 17:34:13.898737 systemd[1]: Running in initrd.
Sep 12 17:34:13.898745 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:34:13.898753 systemd[1]: Hostname set to .
Sep 12 17:34:13.898761 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:34:13.898769 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:34:13.898777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:13.898788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:13.898797 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:34:13.898818 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:34:13.898829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:34:13.898838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:34:13.898850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:34:13.898858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:34:13.898867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:13.898875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:13.898883 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:34:13.898891 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:34:13.898900 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:34:13.898908 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:34:13.898919 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:34:13.898927 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:34:13.898935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:34:13.898944 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:34:13.898952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:13.898960 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:13.898968 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:13.898977 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:34:13.898985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:34:13.898996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:34:13.899004 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:34:13.899012 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:34:13.899020 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:34:13.899029 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:34:13.899037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:13.899045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:34:13.899053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:13.899064 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:34:13.899073 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:34:13.899082 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:34:13.899110 systemd-journald[193]: Collecting audit messages is disabled.
Sep 12 17:34:13.899129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:34:13.899140 systemd-journald[193]: Journal started
Sep 12 17:34:13.899158 systemd-journald[193]: Runtime Journal (/run/log/journal/e1bfdd4ab48e48309a5edf3bf950195d) is 6.0M, max 48.4M, 42.3M free.
Sep 12 17:34:13.885604 systemd-modules-load[194]: Inserted module 'overlay'
Sep 12 17:34:13.923841 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:34:13.923868 kernel: Bridge firewalling registered
Sep 12 17:34:13.912500 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 12 17:34:13.930371 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:34:13.931598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:13.933820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:13.938401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:13.957395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:13.960397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:34:13.962963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:34:13.972894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:13.979116 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:13.981963 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:13.990469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:34:13.992246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:14.005663 dracut-cmdline[230]: dracut-dracut-053
Sep 12 17:34:14.009244 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:34:14.026972 systemd-resolved[232]: Positive Trust Anchors:
Sep 12 17:34:14.026991 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:34:14.027022 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:34:14.029674 systemd-resolved[232]: Defaulting to hostname 'linux'.
Sep 12 17:34:14.030767 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:14.035570 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:14.110294 kernel: SCSI subsystem initialized
Sep 12 17:34:14.119288 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:34:14.130301 kernel: iscsi: registered transport (tcp)
Sep 12 17:34:14.150619 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:34:14.150708 kernel: QLogic iSCSI HBA Driver
Sep 12 17:34:14.203243 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:34:14.214409 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:34:14.239307 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:34:14.239374 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:34:14.239390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:34:14.282303 kernel: raid6: avx2x4 gen() 30395 MB/s
Sep 12 17:34:14.299302 kernel: raid6: avx2x2 gen() 30949 MB/s
Sep 12 17:34:14.316315 kernel: raid6: avx2x1 gen() 25913 MB/s
Sep 12 17:34:14.316343 kernel: raid6: using algorithm avx2x2 gen() 30949 MB/s
Sep 12 17:34:14.334353 kernel: raid6: .... xor() 19582 MB/s, rmw enabled
Sep 12 17:34:14.334431 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:34:14.354296 kernel: xor: automatically using best checksumming function avx
Sep 12 17:34:14.508313 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:34:14.523697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:34:14.533392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:14.551982 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Sep 12 17:34:14.556723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:14.557719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:34:14.576592 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Sep 12 17:34:14.612569 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:34:14.630462 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:34:14.706190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:14.720191 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:34:14.732351 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:34:14.735302 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:34:14.735830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:14.736179 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:34:14.749189 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:34:14.750765 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:34:14.757749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:34:14.758420 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:34:14.771303 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:34:14.771380 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:34:14.776061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:34:14.772644 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:34:14.779485 kernel: GPT:9289727 != 19775487
Sep 12 17:34:14.779502 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:34:14.779513 kernel: GPT:9289727 != 19775487
Sep 12 17:34:14.779523 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:34:14.779534 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:34:14.775778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:34:14.775905 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:14.779777 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:14.780102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:34:14.780239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:14.780783 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:14.797083 kernel: libata version 3.00 loaded.
Sep 12 17:34:14.796623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:14.814296 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:34:14.814520 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Sep 12 17:34:14.816296 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (478)
Sep 12 17:34:14.817283 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:34:14.818282 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:34:14.818455 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:34:14.825641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:34:14.860971 kernel: scsi host0: ahci
Sep 12 17:34:14.861165 kernel: scsi host1: ahci
Sep 12 17:34:14.861339 kernel: scsi host2: ahci
Sep 12 17:34:14.861490 kernel: scsi host3: ahci
Sep 12 17:34:14.861639 kernel: scsi host4: ahci
Sep 12 17:34:14.862956 kernel: scsi host5: ahci
Sep 12 17:34:14.863107 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 12 17:34:14.863119 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 12 17:34:14.863129 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 12 17:34:14.863154 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 12 17:34:14.863165 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 12 17:34:14.863175 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 12 17:34:14.861581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:14.882772 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:34:14.892466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:34:14.900839 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:34:14.903482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:34:14.918562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:34:14.922135 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:34:14.928325 disk-uuid[560]: Primary Header is updated.
Sep 12 17:34:14.928325 disk-uuid[560]: Secondary Entries is updated.
Sep 12 17:34:14.928325 disk-uuid[560]: Secondary Header is updated.
Sep 12 17:34:14.932281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:34:14.936281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:34:14.953118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:15.135302 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:34:15.144032 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:34:15.144158 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:34:15.144171 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:34:15.144181 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:34:15.145305 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:34:15.146300 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:34:15.146318 kernel: ata3.00: applying bridge limits
Sep 12 17:34:15.147289 kernel: ata3.00: configured for UDMA/100
Sep 12 17:34:15.149285 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:34:15.193734 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:34:15.193955 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:34:15.206288 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:34:15.938402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:34:15.938474 disk-uuid[562]: The operation has completed successfully.
Sep 12 17:34:15.970379 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:34:15.970538 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:34:15.999614 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:34:16.004088 sh[594]: Success
Sep 12 17:34:16.019469 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:34:16.056190 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:34:16.070003 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:34:16.075901 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:34:16.087288 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:34:16.087330 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:16.089523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:34:16.089540 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:34:16.090428 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:34:16.097282 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:34:16.097759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:34:16.107397 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:34:16.109681 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:34:16.120379 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:16.120457 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:16.120478 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:34:16.124302 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:34:16.134796 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:34:16.137056 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:16.150322 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:34:16.159416 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:34:16.575405 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:34:16.584444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:34:16.604148 ignition[686]: Ignition 2.19.0
Sep 12 17:34:16.604161 ignition[686]: Stage: fetch-offline
Sep 12 17:34:16.604227 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:16.604241 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:16.604398 ignition[686]: parsed url from cmdline: ""
Sep 12 17:34:16.604403 ignition[686]: no config URL provided
Sep 12 17:34:16.604410 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:34:16.604422 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:34:16.604457 ignition[686]: op(1): [started] loading QEMU firmware config module
Sep 12 17:34:16.604464 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:34:16.614716 ignition[686]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:34:16.616928 systemd-networkd[781]: lo: Link UP
Sep 12 17:34:16.616939 systemd-networkd[781]: lo: Gained carrier
Sep 12 17:34:16.618625 systemd-networkd[781]: Enumeration completed
Sep 12 17:34:16.619036 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:16.619040 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:34:16.627641 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:34:16.629107 systemd-networkd[781]: eth0: Link UP
Sep 12 17:34:16.629118 systemd-networkd[781]: eth0: Gained carrier
Sep 12 17:34:16.629127 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:16.634155 systemd[1]: Reached target network.target - Network.
Sep 12 17:34:16.642314 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:34:16.668935 ignition[686]: parsing config with SHA512: 3480157270c3b84322ab91b852bd722412a3e8bde94bb02212291c9eea81661f9973e025655c712f5dc22104617917e965ca32f6454b1401a2af4a47712b3e6d
Sep 12 17:34:16.674624 unknown[686]: fetched base config from "system"
Sep 12 17:34:16.675862 ignition[686]: fetch-offline: fetch-offline passed
Sep 12 17:34:16.674638 unknown[686]: fetched user config from "qemu"
Sep 12 17:34:16.675934 ignition[686]: Ignition finished successfully
Sep 12 17:34:16.681459 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:34:16.683926 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:34:16.695434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:34:16.740433 ignition[787]: Ignition 2.19.0
Sep 12 17:34:16.740445 ignition[787]: Stage: kargs
Sep 12 17:34:16.740617 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:16.740629 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:16.741445 ignition[787]: kargs: kargs passed
Sep 12 17:34:16.741487 ignition[787]: Ignition finished successfully
Sep 12 17:34:16.746023 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:34:16.758388 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:34:16.771284 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.87
Sep 12 17:34:16.771301 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Sep 12 17:34:16.777393 ignition[795]: Ignition 2.19.0
Sep 12 17:34:16.777405 ignition[795]: Stage: disks
Sep 12 17:34:16.777617 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:16.777634 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:16.781387 ignition[795]: disks: disks passed
Sep 12 17:34:16.781442 ignition[795]: Ignition finished successfully
Sep 12 17:34:16.784398 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:34:16.785702 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:34:16.787438 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:34:16.787863 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:34:16.788184 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:34:16.788500 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:34:16.799440 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:34:16.813112 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:34:16.820891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:34:16.832507 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:34:16.938291 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:34:16.938917 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:34:16.941088 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:34:16.953368 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:34:16.956386 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:34:16.958906 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:34:16.958953 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:34:16.967370 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Sep 12 17:34:16.967395 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:16.967406 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:16.967418 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:34:16.958975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:34:16.967548 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:34:16.973297 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:34:16.973346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:34:16.975678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:34:17.017067 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:34:17.021314 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:34:17.025624 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:34:17.029538 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:34:17.125464 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:34:17.138427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:34:17.140425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:34:17.147030 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:34:17.148939 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:17.165922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:34:17.177679 ignition[927]: INFO : Ignition 2.19.0
Sep 12 17:34:17.177679 ignition[927]: INFO : Stage: mount
Sep 12 17:34:17.179419 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:17.179419 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:17.179419 ignition[927]: INFO : mount: mount passed
Sep 12 17:34:17.179419 ignition[927]: INFO : Ignition finished successfully
Sep 12 17:34:17.184969 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:34:17.193359 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:34:17.201643 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:34:17.214681 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 12 17:34:17.214719 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:34:17.214731 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:34:17.216282 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:34:17.219293 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:34:17.220393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:34:17.253841 ignition[957]: INFO : Ignition 2.19.0
Sep 12 17:34:17.253841 ignition[957]: INFO : Stage: files
Sep 12 17:34:17.255662 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:17.255662 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:17.255662 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:34:17.258894 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:34:17.258894 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:34:17.262139 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:34:17.263585 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:34:17.263585 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:34:17.262816 unknown[957]: wrote ssh authorized keys file for user: core
Sep 12 17:34:17.267302 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:34:17.267302 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 17:34:17.312934 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:34:17.754602 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 17:34:17.754602 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:17.759275 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 17:34:18.176463 systemd-networkd[781]: eth0: Gained IPv6LL
Sep 12 17:34:18.241990 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:34:18.947808 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 17:34:18.947808 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 17:34:18.952364 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:34:19.039657 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:34:19.046410 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:34:19.047921 ignition[957]: INFO : files: files passed
Sep 12 17:34:19.047921 ignition[957]: INFO : Ignition finished successfully
Sep 12 17:34:19.050090 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:34:19.060513 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:34:19.062723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:34:19.065610 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:34:19.065777 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:34:19.076556 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:34:19.080163 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:19.080163 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:19.083544 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:34:19.083600 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:34:19.086476 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:34:19.097418 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:34:19.126593 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:34:19.126736 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:34:19.129053 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:34:19.129610 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:34:19.129977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:34:19.133860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:34:19.157656 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:34:19.171589 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:34:19.186230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:19.188718 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:19.189309 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:34:19.189759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:34:19.189915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:34:19.194773 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:34:19.195328 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:34:19.198496 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:34:19.198936 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:34:19.201915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:34:19.204241 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:34:19.206876 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:34:19.207578 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:34:19.207953 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:34:19.208431 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:34:19.208920 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:34:19.209295 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:34:19.217419 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:19.217944 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:19.218239 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:34:19.223280 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:19.225906 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:34:19.226089 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:34:19.229237 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:34:19.229611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:34:19.231180 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:34:19.233510 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:34:19.238343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:19.239769 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:34:19.242383 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:34:19.244443 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:34:19.244620 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:34:19.246736 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:34:19.246891 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:34:19.248938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:34:19.249289 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:34:19.251601 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:34:19.251780 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:34:19.263512 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:34:19.264719 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:34:19.264894 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:19.268335 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:34:19.271468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:34:19.271756 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:19.276445 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:34:19.277884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:34:19.287775 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:34:19.288978 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:34:19.310573 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:34:19.357339 ignition[1012]: INFO : Ignition 2.19.0
Sep 12 17:34:19.357339 ignition[1012]: INFO : Stage: umount
Sep 12 17:34:19.359469 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:34:19.359469 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:34:19.359469 ignition[1012]: INFO : umount: umount passed
Sep 12 17:34:19.359469 ignition[1012]: INFO : Ignition finished successfully
Sep 12 17:34:19.366227 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:34:19.366400 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:34:19.367297 systemd[1]: Stopped target network.target - Network.
Sep 12 17:34:19.369835 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:34:19.369898 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:34:19.370194 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:34:19.370240 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:34:19.370580 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:34:19.370635 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:34:19.371204 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:34:19.371292 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:34:19.377532 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:34:19.377991 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:19.389578 systemd-networkd[781]: eth0: DHCPv6 lease lost
Sep 12 17:34:19.391783 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:34:19.393044 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:34:19.395564 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:34:19.396610 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:19.400525 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:34:19.400584 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:19.415446 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:34:19.415961 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:34:19.416049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:34:19.419628 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:34:19.421165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:19.423237 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:34:19.424308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:19.426384 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:34:19.426436 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:19.431121 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:19.445052 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:34:19.445233 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:34:19.455392 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:34:19.456011 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:19.458221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:34:19.458308 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:19.460591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:34:19.460637 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:19.462729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:34:19.462788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:34:19.465933 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:34:19.465989 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:34:19.467763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:34:19.467821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:34:19.485656 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:34:19.486889 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:34:19.486977 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:19.498025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:34:19.498119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:19.500510 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:34:19.500631 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:34:19.565786 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:34:19.565955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:34:19.567723 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:34:19.570681 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:34:19.570762 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:34:19.584531 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:34:19.593835 systemd[1]: Switching root.
Sep 12 17:34:19.632444 systemd-journald[193]: Journal stopped
Sep 12 17:34:20.987775 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:34:20.989240 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:34:20.989289 kernel: SELinux: policy capability open_perms=1
Sep 12 17:34:20.989312 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:34:20.989336 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:34:20.989352 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:34:20.989369 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:34:20.989385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:34:20.989401 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:34:20.989417 kernel: audit: type=1403 audit(1757698460.137:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:34:20.989435 systemd[1]: Successfully loaded SELinux policy in 40.732ms.
Sep 12 17:34:20.989480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.648ms.
Sep 12 17:34:20.989499 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:34:20.989517 systemd[1]: Detected virtualization kvm.
Sep 12 17:34:20.989535 systemd[1]: Detected architecture x86-64.
Sep 12 17:34:20.989553 systemd[1]: Detected first boot.
Sep 12 17:34:20.989570 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:34:20.989587 zram_generator::config[1057]: No configuration found.
Sep 12 17:34:20.989606 systemd[1]: Populated /etc with preset unit settings.
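The "SELinux: policy capability ..." kernel lines above report which optional policy features the freshly loaded policy enables. On a running SELinux system these flags are also exported through selinuxfs, so they can be read back after boot. A minimal sketch, assuming /sys/fs/selinux is mounted (as it is on this host once the policy loads):

```python
import os

# Hedged sketch: re-read the SELinux policy capabilities the kernel printed
# at policy load. Each capability is a file containing "0" or "1" under
# selinuxfs; this only works on an SELinux-enabled kernel.
CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

if os.path.isdir(CAPS_DIR):
    for name in sorted(os.listdir(CAPS_DIR)):
        with open(os.path.join(CAPS_DIR, name)) as f:
            print(f"policy capability {name}={f.read().strip()}")
else:
    print("selinuxfs not mounted or SELinux disabled")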
Sep 12 17:34:20.989623 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:34:20.989644 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:34:20.989662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:34:20.989681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:34:20.989699 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:34:20.989717 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:34:20.989734 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:34:20.989754 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:34:20.989772 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:34:20.989794 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:34:20.989825 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:34:20.989842 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:34:20.989859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:34:20.989877 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:34:20.989894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:34:20.989911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:34:20.989936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:34:20.989954 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:34:20.989974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:34:20.990005 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:34:20.990023 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:34:20.990038 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:34:20.990053 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:34:20.990069 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:34:20.990084 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:34:20.990099 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:34:20.990118 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:34:20.990133 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:34:20.990150 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:34:20.990166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:34:20.990181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:34:20.990196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:34:20.990211 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:34:20.990234 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:34:20.990250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:34:20.990284 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:34:20.990301 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:20.990316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:34:20.990331 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:34:20.990347 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:34:20.990363 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:34:20.990378 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:34:20.990396 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:34:20.990417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:20.990435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:34:20.990452 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:34:20.990469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:20.990487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:34:20.990504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:20.990524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:34:20.990541 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:20.990559 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:34:20.990580 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:34:20.990597 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:34:20.990615 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:34:20.990631 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:34:20.990648 kernel: fuse: init (API version 7.39)
Sep 12 17:34:20.990664 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:34:20.990682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:34:20.990698 kernel: loop: module loaded
Sep 12 17:34:20.990715 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:34:20.990736 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:34:20.990754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:34:20.990771 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:34:20.990788 systemd[1]: Stopped verity-setup.service.
Sep 12 17:34:20.990806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:20.990851 systemd-journald[1127]: Collecting audit messages is disabled.
Sep 12 17:34:20.990880 kernel: ACPI: bus type drm_connector registered
Sep 12 17:34:20.990902 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:34:20.990919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:34:20.990939 systemd-journald[1127]: Journal started
Sep 12 17:34:20.990979 systemd-journald[1127]: Runtime Journal (/run/log/journal/e1bfdd4ab48e48309a5edf3bf950195d) is 6.0M, max 48.4M, 42.3M free.
Sep 12 17:34:20.715773 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:34:20.733718 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:34:20.734335 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:34:20.994711 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:34:20.995290 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:34:20.996830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:34:20.998538 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:34:20.999965 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:34:21.001514 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:34:21.003246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:34:21.004984 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:34:21.005213 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:34:21.007060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:21.007314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:21.009015 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:34:21.009220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:34:21.010623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:21.010807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:21.012697 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:34:21.012888 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:34:21.014387 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:21.014612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:21.016170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:34:21.018169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:34:21.020242 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:34:21.039576 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:34:21.052462 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:34:21.055774 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:34:21.057198 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:34:21.057250 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:34:21.059936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:34:21.062775 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:34:21.068332 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:34:21.069725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:21.072514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:34:21.076282 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:34:21.078005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:34:21.081021 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:34:21.085404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:34:21.089428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:34:21.090328 systemd-journald[1127]: Time spent on flushing to /var/log/journal/e1bfdd4ab48e48309a5edf3bf950195d is 71.459ms for 950 entries.
Sep 12 17:34:21.090328 systemd-journald[1127]: System Journal (/var/log/journal/e1bfdd4ab48e48309a5edf3bf950195d) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:34:21.200625 systemd-journald[1127]: Received client request to flush runtime journal.
Sep 12 17:34:21.097729 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:34:21.102998 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:34:21.108088 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:34:21.109514 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:34:21.157836 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:34:21.159719 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:34:21.168384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:34:21.173469 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:34:21.202673 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:34:21.210277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:34:21.221509 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:34:21.269687 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:34:21.273299 kernel: loop0: detected capacity change from 0 to 140768
Sep 12 17:34:21.290440 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:34:21.295049 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:34:21.295939 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:34:21.305291 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:34:21.313077 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:34:21.340470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:34:21.348296 kernel: loop1: detected capacity change from 0 to 221472
Sep 12 17:34:21.368751 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Sep 12 17:34:21.368772 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Sep 12 17:34:21.377237 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:34:21.387307 kernel: loop2: detected capacity change from 0 to 142488
Sep 12 17:34:21.499664 kernel: loop3: detected capacity change from 0 to 140768
Sep 12 17:34:21.519304 kernel: loop4: detected capacity change from 0 to 221472
Sep 12 17:34:21.526277 kernel: loop5: detected capacity change from 0 to 142488
Sep 12 17:34:21.538101 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:34:21.539050 (sd-merge)[1195]: Merged extensions into '/usr'.
Sep 12 17:34:21.545837 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:34:21.545852 systemd[1]: Reloading...
Sep 12 17:34:21.653637 zram_generator::config[1221]: No configuration found.
Sep 12 17:34:21.727729 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:34:21.801252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:34:21.851103 systemd[1]: Reloading finished in 304 ms.
Sep 12 17:34:21.960346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:34:21.961948 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:34:21.980494 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:34:21.982596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:34:21.990518 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:34:21.990539 systemd[1]: Reloading...
Sep 12 17:34:22.024545 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:34:22.025572 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:34:22.028726 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:34:22.029238 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Sep 12 17:34:22.029432 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Sep 12 17:34:22.101910 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:34:22.101943 systemd-tmpfiles[1259]: Skipping /boot
Sep 12 17:34:22.113288 zram_generator::config[1284]: No configuration found.
Sep 12 17:34:22.127668 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:34:22.127821 systemd-tmpfiles[1259]: Skipping /boot
Sep 12 17:34:22.215625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:34:22.275571 systemd[1]: Reloading finished in 284 ms.
Sep 12 17:34:22.303381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
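The (sd-merge) lines show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images (the loopN capacity changes are their disk images being attached) and overlaying them onto /usr; the 'kubernetes' extension is exactly the kubernetes.raw symlink Ignition wrote earlier. A hedged sketch of the discovery half only, assuming the documented systemd-sysext search directories; the real sd-merge also validates extension-release metadata and then mounts an overlayfs, which this sketch does not attempt:

```python
import os

# Hedged sketch: enumerate candidate sysext images the way systemd-sysext
# discovers them. Real sd-merge validates each image's extension-release
# metadata and then overlay-mounts the merged tree onto /usr (and /opt).
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_PATHS:
    if not os.path.isdir(d):
        continue
    for entry in sorted(os.listdir(d)):
        path = os.path.join(d, entry)
        if entry.endswith(".raw") or os.path.isdir(path):
            # Symlinks (like kubernetes.raw here) resolve to the real image.
            print(f"extension candidate: {path} -> {os.path.realpath(path)}")
```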
Sep 12 17:34:22.315709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:34:22.327618 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:34:22.330842 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:34:22.334905 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:34:22.339214 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:34:22.344498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:34:22.348931 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:34:22.352502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.352678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:22.354460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:22.362577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:22.366389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:22.367507 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:22.371493 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:34:22.372533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.373582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:22.373758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:22.375674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:22.375896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:22.378968 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:34:22.379610 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Sep 12 17:34:22.384687 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:22.384934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:22.395995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.396362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:22.407775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:22.413307 augenrules[1355]: No rules
Sep 12 17:34:22.416356 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:22.420769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:22.423525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:22.434541 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:34:22.435610 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.436670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:34:22.438479 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:34:22.442778 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:34:22.444406 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:34:22.446119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:22.446321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:22.448041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:22.448223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:22.450769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:22.451039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:22.452948 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:34:22.454863 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:34:22.474626 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:34:22.479614 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.479772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:34:22.492598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:34:22.496852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:34:22.499433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:34:22.504421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:34:22.505601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:34:22.507540 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:34:22.522518 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:34:22.524916 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:34:22.524965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:34:22.525580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:34:22.525770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:34:22.527353 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:34:22.527560 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:34:22.530121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:34:22.530595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:34:22.532540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:34:22.532733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:34:22.543630 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 17:34:22.544854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:34:22.544932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:34:22.762298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:34:23.024296 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 12 17:34:23.031721 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 12 17:34:23.030963 systemd-resolved[1329]: Positive Trust Anchors:
Sep 12 17:34:23.030993 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:34:23.031033 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:34:23.035574 kernel: ACPI: button: Power Button [PWRF]
Sep 12 17:34:23.035656 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1382)
Sep 12 17:34:23.042553 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 17:34:23.042823 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 17:34:23.043034 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 17:34:23.046285 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 17:34:23.046390 systemd-resolved[1329]: Defaulting to hostname 'linux'.
Sep 12 17:34:23.048342 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:34:23.048715 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:34:23.082045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:34:23.092587 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:34:23.131697 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 17:34:23.139442 systemd-networkd[1402]: lo: Link UP
Sep 12 17:34:23.139455 systemd-networkd[1402]: lo: Gained carrier
Sep 12 17:34:23.141143 systemd-networkd[1402]: Enumeration completed
Sep 12 17:34:23.141573 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:23.141585 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:34:23.145360 systemd-networkd[1402]: eth0: Link UP
Sep 12 17:34:23.145372 systemd-networkd[1402]: eth0: Gained carrier
Sep 12 17:34:23.145385 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:34:23.164184 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:34:23.165404 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 17:34:23.166957 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:34:23.170102 systemd[1]: Reached target network.target - Network.
Sep 12 17:34:23.171188 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 17:34:23.193480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 17:34:23.197471 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.87/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:34:23.200449 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Sep 12 17:34:23.201773 kernel: kvm_amd: TSC scaling supported
Sep 12 17:34:23.769295 kernel: kvm_amd: Nested Virtualization enabled
Sep 12 17:34:23.769316 kernel: kvm_amd: Nested Paging enabled
Sep 12 17:34:23.769328 kernel: kvm_amd: LBR virtualization supported
Sep 12 17:34:23.769354 systemd-resolved[1329]: Clock change detected. Flushing caches.
Sep 12 17:34:23.769538 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 17:34:23.769639 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-09-12 17:34:23.769271 UTC.
Sep 12 17:34:23.770784 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 12 17:34:23.770812 kernel: kvm_amd: Virtual GIF supported
Sep 12 17:34:23.791790 kernel: EDAC MC: Ver: 3.0.0
Sep 12 17:34:23.829275 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 17:34:23.846950 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 17:34:23.855874 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:34:23.922532 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 17:34:23.924851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:34:23.925998 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:34:23.927166 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 17:34:23.928484 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 17:34:23.929927 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 17:34:23.931288 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 17:34:23.932516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 17:34:23.933726 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 17:34:23.933770 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:34:23.934655 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:34:23.936516 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
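The "Initial clock synchronization" above is systemd-timesyncd completing an SNTP exchange with the DHCP-provided server 10.0.0.1; the large clock step is why systemd-resolved flushes its caches and why the journal timestamps jump by roughly half a second here. For illustration, a minimal SNTP (RFC 4330) query in Python; the server address is taken from the log and the 2208988800-second constant is the standard offset between the NTP epoch (1900) and the Unix epoch (1970):

```python
import socket
import struct

# Hedged sketch: a bare-bones SNTP client query, the same kind of exchange
# systemd-timesyncd performs against 10.0.0.1:123 in the log above.
NTP_SERVER = "10.0.0.1"   # from the log; substitute any reachable NTP server
NTP_TO_UNIX = 2208988800  # seconds from 1900-01-01 to 1970-01-01

packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(2.0)
    s.sendto(packet, (NTP_SERVER, 123))
    data, _ = s.recvfrom(512)

# Transmit timestamp: 32-bit seconds field at offset 40 of the 48-byte reply.
tx_seconds = struct.unpack("!I", data[40:44])[0]
print("server time (unix seconds):", tx_seconds - NTP_TO_UNIX)
```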
Sep 12 17:34:23.939246 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 17:34:23.954576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 17:34:23.957247 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 17:34:23.958981 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 17:34:23.960218 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:34:23.961178 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:34:23.962137 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:34:23.962165 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 17:34:23.963261 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 17:34:23.965438 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 17:34:23.969301 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 17:34:23.969866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 17:34:23.976195 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 17:34:23.977801 jq[1437]: false
Sep 12 17:34:23.977534 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 17:34:23.980930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 17:34:23.984859 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 17:34:23.989810 dbus-daemon[1436]: [system] SELinux support is enabled
Sep 12 17:34:23.989921 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 17:34:23.992948 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 17:34:24.000510 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 17:34:24.002620 extend-filesystems[1438]: Found loop3
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found loop4
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found loop5
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found sr0
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda1
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda2
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda3
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found usr
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda4
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda6
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda7
Sep 12 17:34:24.004541 extend-filesystems[1438]: Found vda9
Sep 12 17:34:24.004541 extend-filesystems[1438]: Checking size of /dev/vda9
Sep 12 17:34:24.003327 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 17:34:24.003886 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 17:34:24.006923 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 17:34:24.008989 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 17:34:24.013099 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:34:24.018457 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 17:34:24.022251 extend-filesystems[1438]: Resized partition /dev/vda9
Sep 12 17:34:24.029313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 17:34:24.029615 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:34:24.030103 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:34:24.030346 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:34:24.032988 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:34:24.036881 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 17:34:24.037111 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 17:34:24.039777 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 17:34:24.052859 update_engine[1451]: I20250912 17:34:24.052709 1451 main.cc:92] Flatcar Update Engine starting
Sep 12 17:34:24.056027 update_engine[1451]: I20250912 17:34:24.054942 1451 update_check_scheduler.cc:74] Next update check in 2m36s
Sep 12 17:34:24.056097 jq[1453]: true
Sep 12 17:34:24.056169 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:34:24.066843 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1385)
Sep 12 17:34:24.075456 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:34:24.083032 tar[1460]: linux-amd64/helm
Sep 12 17:34:24.076956 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:34:24.076983 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:34:24.078294 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:34:24.078311 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:34:24.082915 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:34:24.092789 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 17:34:24.126989 jq[1472]: true
Sep 12 17:34:24.128043 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:34:24.128069 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:34:24.128886 systemd-logind[1445]: New seat seat0.
Sep 12 17:34:24.138941 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:34:24.147868 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:34:24.147868 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 17:34:24.147868 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 17:34:24.151949 extend-filesystems[1438]: Resized filesystem in /dev/vda9
Sep 12 17:34:24.152114 systemd[1]: extend-filesystems.service: Deactivated successfully.
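The resize2fs numbers translate directly: the root filesystem grows from 553472 to 1864699 blocks, and the log states the block size is 4 KiB, so the root goes from roughly 2.11 GiB to roughly 7.11 GiB. A two-line check of that arithmetic:

```python
# Block size is 4 KiB ("(4k) blocks" in the resize2fs output above).
BLOCK = 4096
for blocks in (553472, 1864699):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
```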
Sep 12 17:34:24.152346 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:34:24.184102 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:34:24.190887 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:34:24.191289 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:34:24.194233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:34:24.282057 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:34:24.314670 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:34:24.324052 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:34:24.334795 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:34:24.335049 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:34:24.342069 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:34:24.361593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:34:24.463258 containerd[1461]: time="2025-09-12T17:34:24.461303229Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.499390192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502010284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502039780Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502056220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502252298Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502268579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502336105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502348639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502568621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502583198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840462 containerd[1461]: time="2025-09-12T17:34:24.502595812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840692 tar[1460]: linux-amd64/LICENSE Sep 12 17:34:24.840692 tar[1460]: linux-amd64/README.md Sep 12 17:34:24.840562 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.502605611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.502703284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.503017683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.503147537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.503160281Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.503297548Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.503365686Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511342352Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511445796Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511473087Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511503444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511524443Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:34:24.840986 containerd[1461]: time="2025-09-12T17:34:24.511674895Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512055018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512303524Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512320416Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512333530Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512365510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512380899Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512396689Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512439409Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512470327Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512506435Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512523997Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512539657Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512577638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841260 containerd[1461]: time="2025-09-12T17:34:24.512604619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512631529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512647208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512661205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512674540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512697032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512793072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512809793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512825242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512848857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512870247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512886828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512902187Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512935569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512950317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841525 containerd[1461]: time="2025-09-12T17:34:24.512961027Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513016982Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513035226Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513046197Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513058690Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513068038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513082024Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513094397Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:34:24.841920 containerd[1461]: time="2025-09-12T17:34:24.513105989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.513435657Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.513493024Z" level=info msg="Connect containerd service" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.513535354Z" level=info msg="using legacy CRI server" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.513546164Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.513695304Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.514704456Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:34:24.842073 
containerd[1461]: time="2025-09-12T17:34:24.514892438Z" level=info msg="Start subscribing containerd event" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515028974Z" level=info msg="Start recovering state" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515156734Z" level=info msg="Start event monitor" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515192922Z" level=info msg="Start snapshots syncer" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515332493Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515351138Z" level=info msg="Start streaming server" Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515375674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515466745Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:34:24.842073 containerd[1461]: time="2025-09-12T17:34:24.515541355Z" level=info msg="containerd successfully booted in 0.058691s" Sep 12 17:34:24.843540 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:34:24.844833 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:34:24.846138 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:34:24.873778 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:34:25.080038 systemd-networkd[1402]: eth0: Gained IPv6LL Sep 12 17:34:25.083638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:34:25.085467 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:34:25.094170 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:34:25.096954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:25.099163 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:34:25.120103 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:34:25.120469 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:34:25.122203 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:34:25.124450 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:34:26.433627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:26.435515 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:34:26.437885 systemd[1]: Startup finished in 871ms (kernel) + 6.442s (initrd) + 5.773s (userspace) = 13.087s. Sep 12 17:34:26.462167 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:27.641578 kubelet[1549]: E0912 17:34:27.641489 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:27.646365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:27.646582 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:34:27.646993 systemd[1]: kubelet.service: Consumed 2.402s CPU time. Sep 12 17:34:27.715976 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:34:27.717774 systemd[1]: Started sshd@0-10.0.0.87:22-10.0.0.1:39900.service - OpenSSH per-connection server daemon (10.0.0.1:39900). Sep 12 17:34:27.778897 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 39900 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:27.781680 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:27.801845 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:34:27.813228 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:34:27.816200 systemd-logind[1445]: New session 1 of user core. Sep 12 17:34:27.858081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:34:27.879270 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:34:27.883401 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:34:28.041582 systemd[1566]: Queued start job for default target default.target. Sep 12 17:34:28.054303 systemd[1566]: Created slice app.slice - User Application Slice. Sep 12 17:34:28.054344 systemd[1566]: Reached target paths.target - Paths. Sep 12 17:34:28.054360 systemd[1566]: Reached target timers.target - Timers. Sep 12 17:34:28.056742 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:34:28.071051 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:34:28.071241 systemd[1566]: Reached target sockets.target - Sockets. Sep 12 17:34:28.071267 systemd[1566]: Reached target basic.target - Basic System. Sep 12 17:34:28.071320 systemd[1566]: Reached target default.target - Main User Target. Sep 12 17:34:28.071394 systemd[1566]: Startup finished in 178ms. Sep 12 17:34:28.072211 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:34:28.076121 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:34:28.158105 systemd[1]: Started sshd@1-10.0.0.87:22-10.0.0.1:39914.service - OpenSSH per-connection server daemon (10.0.0.1:39914). Sep 12 17:34:28.218122 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 39914 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:28.220500 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:28.227376 systemd-logind[1445]: New session 2 of user core. Sep 12 17:34:28.237038 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:34:28.299431 sshd[1577]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:28.308771 systemd[1]: sshd@1-10.0.0.87:22-10.0.0.1:39914.service: Deactivated successfully. Sep 12 17:34:28.310582 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:34:28.312062 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:34:28.313526 systemd[1]: Started sshd@2-10.0.0.87:22-10.0.0.1:39920.service - OpenSSH per-connection server daemon (10.0.0.1:39920). Sep 12 17:34:28.314422 systemd-logind[1445]: Removed session 2. 
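The kubelet failure above (and its two retries further down) is the expected first-boot state: /var/lib/kubelet/config.yaml is only written when the node is initialized with kubeadm init or joined with kubeadm join, so until then the unit exits with status 1 and systemd keeps rescheduling it. A small sketch of the same pre-flight condition; the path is taken from the error in the log, everything else is illustrative:

    # Sketch: the condition kubelet trips over above. kubeadm writes this
    # file during init/join; until then the service crash-loops.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")   # path from the log
    if not cfg.is_file():
        raise SystemExit(f"{cfg} missing: node not initialized yet (kubeadm init/join)")
    print(f"kubelet config present, {cfg.stat().st_size} bytes")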
Sep 12 17:34:28.357188 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 39920 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:28.359177 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:28.364807 systemd-logind[1445]: New session 3 of user core. Sep 12 17:34:28.380920 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:34:28.433132 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:28.452194 systemd[1]: sshd@2-10.0.0.87:22-10.0.0.1:39920.service: Deactivated successfully. Sep 12 17:34:28.454328 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:34:28.456066 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:34:28.466029 systemd[1]: Started sshd@3-10.0.0.87:22-10.0.0.1:39930.service - OpenSSH per-connection server daemon (10.0.0.1:39930). Sep 12 17:34:28.467004 systemd-logind[1445]: Removed session 3. Sep 12 17:34:28.502013 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 39930 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:28.503815 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:28.508436 systemd-logind[1445]: New session 4 of user core. Sep 12 17:34:28.517881 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:34:28.573228 sshd[1591]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:28.592434 systemd[1]: sshd@3-10.0.0.87:22-10.0.0.1:39930.service: Deactivated successfully. Sep 12 17:34:28.595000 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:34:28.596820 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:34:28.606137 systemd[1]: Started sshd@4-10.0.0.87:22-10.0.0.1:39942.service - OpenSSH per-connection server daemon (10.0.0.1:39942). Sep 12 17:34:28.607124 systemd-logind[1445]: Removed session 4. Sep 12 17:34:28.642441 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 39942 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:28.644065 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:28.647946 systemd-logind[1445]: New session 5 of user core. Sep 12 17:34:28.664897 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:34:28.725545 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:34:28.725930 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:28.746690 sudo[1601]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:28.748947 sshd[1598]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:28.756503 systemd[1]: sshd@4-10.0.0.87:22-10.0.0.1:39942.service: Deactivated successfully. Sep 12 17:34:28.758249 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:34:28.760011 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:34:28.761389 systemd[1]: Started sshd@5-10.0.0.87:22-10.0.0.1:39948.service - OpenSSH per-connection server daemon (10.0.0.1:39948). Sep 12 17:34:28.762164 systemd-logind[1445]: Removed session 5. 
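Sessions 1 through 5 above all follow the same accept / pam_unix open / close / "Removed session" pattern, which makes the journal easy to audit mechanically. A sketch that tallies opens and closes from journal text, fed with something like `journalctl -t sshd | python3 sessions.py`; the pam_unix message format is copied from the log:

    import re
    import sys

    # Matches the pam_unix lines in this log, e.g.
    #   sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
    #   sshd[1584]: pam_unix(sshd:session): session closed for user core
    PAT = re.compile(
        r"sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed) for user (\w+)"
    )

    open_sessions = 0
    for line in sys.stdin:
        m = PAT.search(line)
        if not m:
            continue
        pid, action, user = m.groups()
        open_sessions += 1 if action == "opened" else -1
        print(f"sshd[{pid}] {action} for {user}; ~{open_sessions} session(s) open")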
Sep 12 17:34:28.803231 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 39948 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:28.805357 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:28.810557 systemd-logind[1445]: New session 6 of user core. Sep 12 17:34:28.820029 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:34:28.877715 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:34:28.878188 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:28.882201 sudo[1610]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:28.888598 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:34:28.888968 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:28.906019 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:28.907838 auditctl[1613]: No rules Sep 12 17:34:28.909385 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:34:28.909682 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:28.911873 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:34:28.949514 augenrules[1631]: No rules Sep 12 17:34:28.950446 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:34:28.952047 sudo[1609]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:28.954110 sshd[1606]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:28.971829 systemd[1]: sshd@5-10.0.0.87:22-10.0.0.1:39948.service: Deactivated successfully. Sep 12 17:34:28.973752 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:34:28.975455 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:34:28.986063 systemd[1]: Started sshd@6-10.0.0.87:22-10.0.0.1:39956.service - OpenSSH per-connection server daemon (10.0.0.1:39956). Sep 12 17:34:28.987345 systemd-logind[1445]: Removed session 6. Sep 12 17:34:29.023622 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 39956 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:29.025155 sshd[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:29.028864 systemd-logind[1445]: New session 7 of user core. Sep 12 17:34:29.044898 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:34:29.098016 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:34:29.098373 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:29.865114 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:34:29.865353 (dockerd)[1661]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:34:30.881696 dockerd[1661]: time="2025-09-12T17:34:30.881597208Z" level=info msg="Starting up" Sep 12 17:34:31.536820 systemd[1]: var-lib-docker-metacopy\x2dcheck2293996840-merged.mount: Deactivated successfully. Sep 12 17:34:31.561036 dockerd[1661]: time="2025-09-12T17:34:31.560936028Z" level=info msg="Loading containers: start." 
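dockerd is starting here and, a few lines below, announces "API listen on /run/docker.sock", at which point the socket set up by docker.socket earlier becomes serviceable. A sketch that exercises that API, assuming the docker Python SDK (docker-py) is installed, which is not part of the Flatcar image itself; the socket path and version string come from the log:

    # Sketch: poke the API the daemon announces on /run/docker.sock.
    # Assumes `pip install docker` on some client host.
    import docker

    client = docker.DockerClient(base_url="unix:///run/docker.sock")
    print(client.ping())                  # True once "Daemon has completed initialization"
    print(client.version()["Version"])    # "26.1.0" on this host, per the log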
Sep 12 17:34:31.691786 kernel: Initializing XFRM netlink socket Sep 12 17:34:31.778500 systemd-networkd[1402]: docker0: Link UP Sep 12 17:34:31.809017 dockerd[1661]: time="2025-09-12T17:34:31.808904619Z" level=info msg="Loading containers: done." Sep 12 17:34:31.830257 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3510274510-merged.mount: Deactivated successfully. Sep 12 17:34:31.830843 dockerd[1661]: time="2025-09-12T17:34:31.830797308Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:34:31.830908 dockerd[1661]: time="2025-09-12T17:34:31.830894991Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:34:31.831038 dockerd[1661]: time="2025-09-12T17:34:31.831015187Z" level=info msg="Daemon has completed initialization" Sep 12 17:34:31.874406 dockerd[1661]: time="2025-09-12T17:34:31.874321406Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:34:31.875017 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:34:32.833074 containerd[1461]: time="2025-09-12T17:34:32.833009105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:34:33.367551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687569235.mount: Deactivated successfully. Sep 12 17:34:35.205895 containerd[1461]: time="2025-09-12T17:34:35.205816458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:35.207441 containerd[1461]: time="2025-09-12T17:34:35.207383787Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:34:35.208701 containerd[1461]: time="2025-09-12T17:34:35.208645092Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:35.212795 containerd[1461]: time="2025-09-12T17:34:35.212750349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:35.213810 containerd[1461]: time="2025-09-12T17:34:35.213771013Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.380669896s" Sep 12 17:34:35.213872 containerd[1461]: time="2025-09-12T17:34:35.213816779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:34:35.214865 containerd[1461]: time="2025-09-12T17:34:35.214837723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:34:37.230918 containerd[1461]: time="2025-09-12T17:34:37.230859603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
17:34:37.233282 containerd[1461]: time="2025-09-12T17:34:37.233236881Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:34:37.236296 containerd[1461]: time="2025-09-12T17:34:37.236249079Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.240124 containerd[1461]: time="2025-09-12T17:34:37.240064162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:37.241465 containerd[1461]: time="2025-09-12T17:34:37.241434762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.026557795s" Sep 12 17:34:37.241522 containerd[1461]: time="2025-09-12T17:34:37.241472082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:34:37.241989 containerd[1461]: time="2025-09-12T17:34:37.241956730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:34:37.896893 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:34:37.917966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:38.435261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:38.444014 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:38.545699 kubelet[1884]: E0912 17:34:38.545606 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:38.551905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:38.552119 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:34:39.441669 containerd[1461]: time="2025-09-12T17:34:39.441573822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.442458 containerd[1461]: time="2025-09-12T17:34:39.442392978Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:34:39.443795 containerd[1461]: time="2025-09-12T17:34:39.443737218Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.447807 containerd[1461]: time="2025-09-12T17:34:39.447739793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.448997 containerd[1461]: time="2025-09-12T17:34:39.448960262Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 2.206972583s" Sep 12 17:34:39.449109 containerd[1461]: time="2025-09-12T17:34:39.449001138Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:34:39.449659 containerd[1461]: time="2025-09-12T17:34:39.449622303Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:34:40.907700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777509300.mount: Deactivated successfully. 
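Each "Pulled image ... in N s" record pairs containerd's reported image size with the wall-clock pull time, so effective throughput falls out directly. Using the kube-scheduler numbers above (how containerd accounts the size is its own convention; only the arithmetic is shown here):

    # Sketch: effective pull rate from the kube-scheduler record above.
    size_bytes = 20_422_395          # size "20422395" in the log
    seconds = 2.206972583            # "in 2.206972583s"
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")   # ~8.8 MiB/s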
Sep 12 17:34:41.901124 containerd[1461]: time="2025-09-12T17:34:41.901037396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.901916 containerd[1461]: time="2025-09-12T17:34:41.901837055Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:34:41.903086 containerd[1461]: time="2025-09-12T17:34:41.903047344Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.904965 containerd[1461]: time="2025-09-12T17:34:41.904926548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:41.905693 containerd[1461]: time="2025-09-12T17:34:41.905657328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.4560003s" Sep 12 17:34:41.905726 containerd[1461]: time="2025-09-12T17:34:41.905691833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:34:41.906309 containerd[1461]: time="2025-09-12T17:34:41.906281558Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:34:42.433708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141539218.mount: Deactivated successfully. 
Sep 12 17:34:43.322879 containerd[1461]: time="2025-09-12T17:34:43.322815890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.323697 containerd[1461]: time="2025-09-12T17:34:43.323620689Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:34:43.325298 containerd[1461]: time="2025-09-12T17:34:43.325242239Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.328696 containerd[1461]: time="2025-09-12T17:34:43.328661330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.330101 containerd[1461]: time="2025-09-12T17:34:43.330062878Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.423748919s" Sep 12 17:34:43.330158 containerd[1461]: time="2025-09-12T17:34:43.330101130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:34:43.330908 containerd[1461]: time="2025-09-12T17:34:43.330697698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:34:43.877851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3993398595.mount: Deactivated successfully. 
Sep 12 17:34:43.884937 containerd[1461]: time="2025-09-12T17:34:43.884866885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.885848 containerd[1461]: time="2025-09-12T17:34:43.885784065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:34:43.887334 containerd[1461]: time="2025-09-12T17:34:43.887301480Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.890430 containerd[1461]: time="2025-09-12T17:34:43.890379682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:43.891209 containerd[1461]: time="2025-09-12T17:34:43.891177217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 560.449763ms" Sep 12 17:34:43.891209 containerd[1461]: time="2025-09-12T17:34:43.891205520Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:34:43.891748 containerd[1461]: time="2025-09-12T17:34:43.891711970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:34:44.439934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756334533.mount: Deactivated successfully. Sep 12 17:34:47.344425 containerd[1461]: time="2025-09-12T17:34:47.344337061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.345193 containerd[1461]: time="2025-09-12T17:34:47.345132943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:34:47.347773 containerd[1461]: time="2025-09-12T17:34:47.347116643Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.352996 containerd[1461]: time="2025-09-12T17:34:47.352939821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:47.354146 containerd[1461]: time="2025-09-12T17:34:47.354106198Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.462350637s" Sep 12 17:34:47.354146 containerd[1461]: time="2025-09-12T17:34:47.354143528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:34:48.803021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 12 17:34:48.817223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:49.001107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:49.008899 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:49.062152 kubelet[2041]: E0912 17:34:49.061972 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:49.067264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:49.067561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:49.549223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:49.560981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:49.586528 systemd[1]: Reloading requested from client PID 2056 ('systemctl') (unit session-7.scope)... Sep 12 17:34:49.586542 systemd[1]: Reloading... Sep 12 17:34:49.669800 zram_generator::config[2098]: No configuration found. Sep 12 17:34:50.271480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:50.367027 systemd[1]: Reloading finished in 780 ms. Sep 12 17:34:50.415536 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:34:50.415634 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:34:50.416146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:50.418856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:50.607808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:50.613076 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:34:50.716053 kubelet[2144]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:50.716053 kubelet[2144]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:34:50.716053 kubelet[2144]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
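Before this successful start, the unit failed three times (17:34:27, 17:34:38, 17:34:49). The spacing is consistent with a RestartSec= of roughly 10 s on the unit, though the drop-in itself is not part of this log. The intervals, computed from the "Failed with result" timestamps above:

    # Sketch: crash-loop spacing from the three failure timestamps above.
    # The ~10 s cadence suggests RestartSec=10s (an assumption; the unit
    # file is not shown in this log).
    from datetime import datetime

    failures = ["17:34:27.646582", "17:34:38.552119", "17:34:49.067561"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in failures]
    for a, b in zip(ts, ts[1:]):
        print(f"{(b - a).total_seconds():.1f} s between failures")   # 10.9, 10.5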
Sep 12 17:34:50.716517 kubelet[2144]: I0912 17:34:50.716123 2144 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:34:51.254411 kubelet[2144]: I0912 17:34:51.254357 2144 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:34:51.254411 kubelet[2144]: I0912 17:34:51.254398 2144 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:34:51.254683 kubelet[2144]: I0912 17:34:51.254661 2144 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:34:51.310357 kubelet[2144]: E0912 17:34:51.310301 2144 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.311827 kubelet[2144]: I0912 17:34:51.311776 2144 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:34:51.323672 kubelet[2144]: E0912 17:34:51.323635 2144 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:34:51.323672 kubelet[2144]: I0912 17:34:51.323670 2144 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:34:51.330838 kubelet[2144]: I0912 17:34:51.330794 2144 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:34:51.331541 kubelet[2144]: I0912 17:34:51.331508 2144 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:34:51.331716 kubelet[2144]: I0912 17:34:51.331663 2144 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:34:51.331938 kubelet[2144]: I0912 17:34:51.331705 2144 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:34:51.332072 kubelet[2144]: I0912 17:34:51.331947 2144 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:34:51.332072 kubelet[2144]: I0912 17:34:51.331958 2144 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:34:51.332116 kubelet[2144]: I0912 17:34:51.332091 2144 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:51.334602 kubelet[2144]: I0912 17:34:51.334566 2144 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:34:51.334602 kubelet[2144]: I0912 17:34:51.334593 2144 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:34:51.334690 kubelet[2144]: I0912 17:34:51.334630 2144 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:34:51.334690 kubelet[2144]: I0912 17:34:51.334650 2144 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:34:51.338131 kubelet[2144]: I0912 17:34:51.338093 2144 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:34:51.340478 kubelet[2144]: I0912 17:34:51.340454 2144 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:34:51.340543 kubelet[2144]: W0912 17:34:51.340522 2144 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 17:34:51.342432 kubelet[2144]: W0912 17:34:51.342383 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:51.342473 kubelet[2144]: E0912 17:34:51.342439 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.342473 kubelet[2144]: W0912 17:34:51.342388 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:51.342473 kubelet[2144]: E0912 17:34:51.342466 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.344775 kubelet[2144]: I0912 17:34:51.344055 2144 server.go:1274] "Started kubelet" Sep 12 17:34:51.344775 kubelet[2144]: I0912 17:34:51.344126 2144 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:34:51.344775 kubelet[2144]: I0912 17:34:51.344369 2144 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:34:51.344775 kubelet[2144]: I0912 17:34:51.344707 2144 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:34:51.345185 kubelet[2144]: I0912 17:34:51.345171 2144 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:34:51.347050 kubelet[2144]: I0912 17:34:51.346993 2144 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:34:51.347691 kubelet[2144]: I0912 17:34:51.347672 2144 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:34:51.348399 kubelet[2144]: E0912 17:34:51.348373 2144 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.348434 kubelet[2144]: I0912 17:34:51.348414 2144 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:34:51.348583 kubelet[2144]: I0912 17:34:51.348565 2144 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:34:51.348626 kubelet[2144]: I0912 17:34:51.348619 2144 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:34:51.350169 kubelet[2144]: I0912 17:34:51.350151 2144 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:34:51.350312 kubelet[2144]: I0912 17:34:51.350296 2144 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:34:51.351138 kubelet[2144]: E0912 17:34:51.350010 2144 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864997716cce225 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:34:51.344020005 +0000 UTC m=+0.726336902,LastTimestamp:2025-09-12 17:34:51.344020005 +0000 UTC m=+0.726336902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:34:51.351433 kubelet[2144]: W0912 17:34:51.351398 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:51.351461 kubelet[2144]: E0912 17:34:51.351440 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.355433 kubelet[2144]: E0912 17:34:51.355027 2144 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:34:51.355433 kubelet[2144]: E0912 17:34:51.355385 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="200ms" Sep 12 17:34:51.356530 kubelet[2144]: I0912 17:34:51.356513 2144 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:34:51.376084 kubelet[2144]: I0912 17:34:51.376033 2144 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:34:51.377551 kubelet[2144]: I0912 17:34:51.377408 2144 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:34:51.377551 kubelet[2144]: I0912 17:34:51.377429 2144 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:34:51.377551 kubelet[2144]: I0912 17:34:51.377449 2144 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:34:51.377551 kubelet[2144]: E0912 17:34:51.377486 2144 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:34:51.377936 kubelet[2144]: W0912 17:34:51.377913 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:51.378028 kubelet[2144]: E0912 17:34:51.378009 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:51.378272 kubelet[2144]: I0912 17:34:51.378233 2144 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:34:51.378272 kubelet[2144]: I0912 17:34:51.378265 2144 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:34:51.378334 kubelet[2144]: I0912 17:34:51.378314 2144 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:51.448771 kubelet[2144]: E0912 17:34:51.448696 2144 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.478068 kubelet[2144]: E0912 17:34:51.478025 2144 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:51.549411 kubelet[2144]: E0912 17:34:51.549264 2144 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.556898 kubelet[2144]: E0912 17:34:51.556847 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="400ms" Sep 12 17:34:51.649573 kubelet[2144]: E0912 17:34:51.649511 2144 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.678870 kubelet[2144]: E0912 17:34:51.678798 2144 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:51.716136 kubelet[2144]: I0912 17:34:51.716075 2144 policy_none.go:49] "None policy: Start" Sep 12 17:34:51.717058 kubelet[2144]: I0912 17:34:51.717035 2144 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:34:51.717118 kubelet[2144]: I0912 17:34:51.717079 2144 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:34:51.726286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:34:51.741230 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:34:51.747019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 12 17:34:51.750357 kubelet[2144]: E0912 17:34:51.750315 2144 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:34:51.757899 kubelet[2144]: I0912 17:34:51.757866 2144 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:34:51.758240 kubelet[2144]: I0912 17:34:51.758218 2144 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:34:51.758298 kubelet[2144]: I0912 17:34:51.758238 2144 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:34:51.758539 kubelet[2144]: I0912 17:34:51.758517 2144 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:34:51.760157 kubelet[2144]: E0912 17:34:51.760123 2144 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:34:51.860192 kubelet[2144]: I0912 17:34:51.860078 2144 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:51.860832 kubelet[2144]: E0912 17:34:51.860720 2144 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Sep 12 17:34:51.957715 kubelet[2144]: E0912 17:34:51.957659 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="800ms" Sep 12 17:34:52.062647 kubelet[2144]: I0912 17:34:52.062611 2144 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:52.063113 kubelet[2144]: E0912 17:34:52.063075 2144 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Sep 12 17:34:52.088790 systemd[1]: Created slice kubepods-burstable-pod4aabd0001a3abd7321f75995f63c7c8e.slice - libcontainer container kubepods-burstable-pod4aabd0001a3abd7321f75995f63c7c8e.slice. Sep 12 17:34:52.103643 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. Sep 12 17:34:52.108444 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. 
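Editor's note: the kubepods-burstable-pod<hash>.slice units created above follow the systemd cgroup driver's naming scheme (CgroupDriver":"systemd" in the node config earlier). Later entries, e.g. kubepods-besteffort-podc76cd7bb_66de_4d82_bb54_33c1af2b7fa3.slice, show pod UIDs with dashes rewritten as underscores, since '-' acts as a hierarchy separator in systemd unit names. A small sketch of that convention, assuming QoS class and pod UID are the only inputs:

package main

import (
	"fmt"
	"strings"
)

// podSlice builds the per-pod systemd slice name used by the cgroup
// driver: dashes in the pod UID become underscores because systemd
// treats '-' in unit names as a hierarchy separator.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UID taken from the kube-proxy pod that appears later in this log.
	fmt.Println(podSlice("besteffort", "c76cd7bb-66de-4d82-bb54-33c1af2b7fa3"))
	// Static pods use the manifest config hash as the UID, so there are
	// no dashes to escape, as in the slices created above.
	fmt.Println(podSlice("burstable", "4aabd0001a3abd7321f75995f63c7c8e"))
}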
Sep 12 17:34:52.151900 kubelet[2144]: I0912 17:34:52.151784 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:52.151900 kubelet[2144]: I0912 17:34:52.151837 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:34:52.151900 kubelet[2144]: I0912 17:34:52.151860 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:52.151900 kubelet[2144]: I0912 17:34:52.151873 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:52.152060 kubelet[2144]: I0912 17:34:52.151901 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:52.152060 kubelet[2144]: I0912 17:34:52.151947 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:52.152060 kubelet[2144]: I0912 17:34:52.151990 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:52.152060 kubelet[2144]: I0912 17:34:52.152019 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:52.152060 kubelet[2144]: I0912 17:34:52.152038 2144 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:52.247493 kubelet[2144]: W0912 17:34:52.247384 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:52.247493 kubelet[2144]: E0912 17:34:52.247493 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.402618 kubelet[2144]: E0912 17:34:52.402488 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:52.403354 containerd[1461]: time="2025-09-12T17:34:52.403298873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4aabd0001a3abd7321f75995f63c7c8e,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:52.406415 kubelet[2144]: E0912 17:34:52.406393 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:52.406797 containerd[1461]: time="2025-09-12T17:34:52.406751006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:52.412117 kubelet[2144]: E0912 17:34:52.412076 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:52.412526 containerd[1461]: time="2025-09-12T17:34:52.412492802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:52.465228 kubelet[2144]: I0912 17:34:52.465197 2144 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:52.465651 kubelet[2144]: E0912 17:34:52.465611 2144 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Sep 12 17:34:52.582493 kubelet[2144]: W0912 17:34:52.582410 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:52.582559 kubelet[2144]: E0912 17:34:52.582489 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.87:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.628426 kubelet[2144]: E0912 17:34:52.628300 2144 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.87:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.87:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.1864997716cce225 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:34:51.344020005 +0000 UTC m=+0.726336902,LastTimestamp:2025-09-12 17:34:51.344020005 +0000 UTC m=+0.726336902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:34:52.756651 kubelet[2144]: W0912 17:34:52.756525 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:52.756651 kubelet[2144]: E0912 17:34:52.756597 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.87:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:52.758037 kubelet[2144]: E0912 17:34:52.757997 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="1.6s" Sep 12 17:34:52.762520 kubelet[2144]: W0912 17:34:52.762487 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:52.762819 kubelet[2144]: E0912 17:34:52.762795 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.87:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:53.267554 kubelet[2144]: I0912 17:34:53.267520 2144 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:53.267936 kubelet[2144]: E0912 17:34:53.267907 2144 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.87:6443/api/v1/nodes\": dial tcp 10.0.0.87:6443: connect: connection refused" node="localhost" Sep 12 17:34:53.360642 kubelet[2144]: E0912 17:34:53.360571 2144 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.87:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:53.610771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456532427.mount: Deactivated successfully. 
Sep 12 17:34:53.620806 containerd[1461]: time="2025-09-12T17:34:53.620731829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:53.621648 containerd[1461]: time="2025-09-12T17:34:53.621590499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:53.622737 containerd[1461]: time="2025-09-12T17:34:53.622699850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:53.623665 containerd[1461]: time="2025-09-12T17:34:53.623638880Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:53.624681 containerd[1461]: time="2025-09-12T17:34:53.624615010Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:53.625702 containerd[1461]: time="2025-09-12T17:34:53.625635454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:53.626679 containerd[1461]: time="2025-09-12T17:34:53.626619509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:34:53.628613 containerd[1461]: time="2025-09-12T17:34:53.628576669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:53.630557 containerd[1461]: time="2025-09-12T17:34:53.630487942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.227098139s" Sep 12 17:34:53.631318 containerd[1461]: time="2025-09-12T17:34:53.631292140Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.224445706s" Sep 12 17:34:53.633066 containerd[1461]: time="2025-09-12T17:34:53.633006835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.220448871s" Sep 12 17:34:54.000873 containerd[1461]: time="2025-09-12T17:34:54.000484896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:54.001069 containerd[1461]: time="2025-09-12T17:34:54.000657801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:54.001069 containerd[1461]: time="2025-09-12T17:34:54.000696413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.001069 containerd[1461]: time="2025-09-12T17:34:54.000877312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.007226 containerd[1461]: time="2025-09-12T17:34:54.006933768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:54.007226 containerd[1461]: time="2025-09-12T17:34:54.007015782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:54.007226 containerd[1461]: time="2025-09-12T17:34:54.007032804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.007226 containerd[1461]: time="2025-09-12T17:34:54.007133683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.071048 systemd[1]: Started cri-containerd-12a4cc25bdb504bb401fa53c43ab92850f888f9b406910caa92abeb50844a9b3.scope - libcontainer container 12a4cc25bdb504bb401fa53c43ab92850f888f9b406910caa92abeb50844a9b3. Sep 12 17:34:54.075071 containerd[1461]: time="2025-09-12T17:34:54.074673500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:54.075071 containerd[1461]: time="2025-09-12T17:34:54.074798164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:54.075071 containerd[1461]: time="2025-09-12T17:34:54.074814114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.075512 containerd[1461]: time="2025-09-12T17:34:54.074892220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:54.075831 systemd[1]: Started cri-containerd-e298e5191a92480ca468ae6d95835a2846e42ded703c17add0984fdf7d964768.scope - libcontainer container e298e5191a92480ca468ae6d95835a2846e42ded703c17add0984fdf7d964768. 
Sep 12 17:34:54.259527 kubelet[2144]: W0912 17:34:54.259388 2144 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.87:6443: connect: connection refused Sep 12 17:34:54.259527 kubelet[2144]: E0912 17:34:54.259447 2144 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.87:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.87:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:54.322122 systemd[1]: Started cri-containerd-d20fae2bf57db60ed79e614b408c3fc52db59cd811fff8d995014b77719b0925.scope - libcontainer container d20fae2bf57db60ed79e614b408c3fc52db59cd811fff8d995014b77719b0925. Sep 12 17:34:54.340919 containerd[1461]: time="2025-09-12T17:34:54.340077272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"12a4cc25bdb504bb401fa53c43ab92850f888f9b406910caa92abeb50844a9b3\"" Sep 12 17:34:54.342111 kubelet[2144]: E0912 17:34:54.342047 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:54.345071 containerd[1461]: time="2025-09-12T17:34:54.345032503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e298e5191a92480ca468ae6d95835a2846e42ded703c17add0984fdf7d964768\"" Sep 12 17:34:54.345163 containerd[1461]: time="2025-09-12T17:34:54.345065054Z" level=info msg="CreateContainer within sandbox \"12a4cc25bdb504bb401fa53c43ab92850f888f9b406910caa92abeb50844a9b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:34:54.345867 kubelet[2144]: E0912 17:34:54.345832 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:54.347554 containerd[1461]: time="2025-09-12T17:34:54.347524926Z" level=info msg="CreateContainer within sandbox \"e298e5191a92480ca468ae6d95835a2846e42ded703c17add0984fdf7d964768\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:34:54.359425 kubelet[2144]: E0912 17:34:54.359367 2144 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.87:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.87:6443: connect: connection refused" interval="3.2s" Sep 12 17:34:54.370448 containerd[1461]: time="2025-09-12T17:34:54.370384117Z" level=info msg="CreateContainer within sandbox \"12a4cc25bdb504bb401fa53c43ab92850f888f9b406910caa92abeb50844a9b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0445e6a7e88f47b70a6ba2a6ffc114b2566cfb6e6b2101a9960b381f4feb87aa\"" Sep 12 17:34:54.372534 containerd[1461]: time="2025-09-12T17:34:54.371242176Z" level=info msg="StartContainer for \"0445e6a7e88f47b70a6ba2a6ffc114b2566cfb6e6b2101a9960b381f4feb87aa\"" Sep 12 17:34:54.374962 containerd[1461]: time="2025-09-12T17:34:54.374923348Z" level=info msg="CreateContainer within sandbox 
\"e298e5191a92480ca468ae6d95835a2846e42ded703c17add0984fdf7d964768\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e9457cd7efaceaa27c5de4ef27e31db2400fe0f01469d057ecb591410e560857\"" Sep 12 17:34:54.375503 containerd[1461]: time="2025-09-12T17:34:54.375478469Z" level=info msg="StartContainer for \"e9457cd7efaceaa27c5de4ef27e31db2400fe0f01469d057ecb591410e560857\"" Sep 12 17:34:54.384593 containerd[1461]: time="2025-09-12T17:34:54.384543797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4aabd0001a3abd7321f75995f63c7c8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d20fae2bf57db60ed79e614b408c3fc52db59cd811fff8d995014b77719b0925\"" Sep 12 17:34:54.385148 kubelet[2144]: E0912 17:34:54.385120 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:54.387518 containerd[1461]: time="2025-09-12T17:34:54.387447381Z" level=info msg="CreateContainer within sandbox \"d20fae2bf57db60ed79e614b408c3fc52db59cd811fff8d995014b77719b0925\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:34:54.452133 systemd[1]: Started cri-containerd-0445e6a7e88f47b70a6ba2a6ffc114b2566cfb6e6b2101a9960b381f4feb87aa.scope - libcontainer container 0445e6a7e88f47b70a6ba2a6ffc114b2566cfb6e6b2101a9960b381f4feb87aa. Sep 12 17:34:54.455648 systemd[1]: Started cri-containerd-e9457cd7efaceaa27c5de4ef27e31db2400fe0f01469d057ecb591410e560857.scope - libcontainer container e9457cd7efaceaa27c5de4ef27e31db2400fe0f01469d057ecb591410e560857. Sep 12 17:34:54.460799 containerd[1461]: time="2025-09-12T17:34:54.458738652Z" level=info msg="CreateContainer within sandbox \"d20fae2bf57db60ed79e614b408c3fc52db59cd811fff8d995014b77719b0925\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5eee2aa92b0376baebe075b797b1830d7c380f03d4f3870a038cebc87fb661c5\"" Sep 12 17:34:54.460799 containerd[1461]: time="2025-09-12T17:34:54.460172741Z" level=info msg="StartContainer for \"5eee2aa92b0376baebe075b797b1830d7c380f03d4f3870a038cebc87fb661c5\"" Sep 12 17:34:54.501933 systemd[1]: Started cri-containerd-5eee2aa92b0376baebe075b797b1830d7c380f03d4f3870a038cebc87fb661c5.scope - libcontainer container 5eee2aa92b0376baebe075b797b1830d7c380f03d4f3870a038cebc87fb661c5. 
Sep 12 17:34:54.687320 containerd[1461]: time="2025-09-12T17:34:54.687152578Z" level=info msg="StartContainer for \"e9457cd7efaceaa27c5de4ef27e31db2400fe0f01469d057ecb591410e560857\" returns successfully" Sep 12 17:34:54.687320 containerd[1461]: time="2025-09-12T17:34:54.687215135Z" level=info msg="StartContainer for \"0445e6a7e88f47b70a6ba2a6ffc114b2566cfb6e6b2101a9960b381f4feb87aa\" returns successfully" Sep 12 17:34:54.687838 containerd[1461]: time="2025-09-12T17:34:54.687330973Z" level=info msg="StartContainer for \"5eee2aa92b0376baebe075b797b1830d7c380f03d4f3870a038cebc87fb661c5\" returns successfully" Sep 12 17:34:54.871793 kubelet[2144]: I0912 17:34:54.870066 2144 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:55.430263 kubelet[2144]: E0912 17:34:55.430208 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.431793 kubelet[2144]: E0912 17:34:55.431489 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:55.433550 kubelet[2144]: E0912 17:34:55.433519 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.339790 kubelet[2144]: I0912 17:34:56.337316 2144 apiserver.go:52] "Watching apiserver" Sep 12 17:34:56.349509 kubelet[2144]: I0912 17:34:56.349457 2144 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:34:56.429469 kubelet[2144]: I0912 17:34:56.429407 2144 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:34:56.439728 kubelet[2144]: E0912 17:34:56.439678 2144 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:56.441144 kubelet[2144]: E0912 17:34:56.440313 2144 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:56.441144 kubelet[2144]: E0912 17:34:56.440422 2144 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:34:56.441144 kubelet[2144]: E0912 17:34:56.441049 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.441144 kubelet[2144]: E0912 17:34:56.441064 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:56.441144 kubelet[2144]: E0912 17:34:56.440923 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:58.559442 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... 
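Editor's note: the recurring "Nameserver limits exceeded" errors are the kubelet warning that /etc/resolv.conf lists more nameservers than the resolver limit of three, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied to pods; it is a warning, not a failure. A small sketch of the check, assuming the conventional limit of 3:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver limit the kubelet enforces per pod

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the log: extra entries are dropped and the applied
		// line is reported, but startup continues.
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers:", strings.Join(servers, " "))
}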
Sep 12 17:34:58.559458 systemd[1]: Reloading... Sep 12 17:34:58.639790 zram_generator::config[2463]: No configuration found. Sep 12 17:34:58.743940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:58.790002 kubelet[2144]: E0912 17:34:58.789967 2144 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:58.841711 systemd[1]: Reloading finished in 281 ms. Sep 12 17:34:58.887509 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:58.904692 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:34:58.905078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:58.905142 systemd[1]: kubelet.service: Consumed 1.327s CPU time, 133.0M memory peak, 0B memory swap peak. Sep 12 17:34:58.911993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:59.078469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:59.093228 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:34:59.172895 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:59.172895 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:34:59.172895 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:59.173356 kubelet[2505]: I0912 17:34:59.172956 2505 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:34:59.178920 kubelet[2505]: I0912 17:34:59.178888 2505 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:34:59.178920 kubelet[2505]: I0912 17:34:59.178910 2505 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:34:59.179147 kubelet[2505]: I0912 17:34:59.179122 2505 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:34:59.180264 kubelet[2505]: I0912 17:34:59.180247 2505 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:34:59.181959 kubelet[2505]: I0912 17:34:59.181934 2505 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:34:59.191585 kubelet[2505]: E0912 17:34:59.190051 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:34:59.191585 kubelet[2505]: I0912 17:34:59.190096 2505 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 12 17:34:59.196973 kubelet[2505]: I0912 17:34:59.196515 2505 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:34:59.196973 kubelet[2505]: I0912 17:34:59.196788 2505 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:34:59.196973 kubelet[2505]: I0912 17:34:59.196941 2505 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:34:59.197150 kubelet[2505]: I0912 17:34:59.196976 2505 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:34:59.197307 kubelet[2505]: I0912 17:34:59.197159 2505 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:34:59.197307 kubelet[2505]: I0912 17:34:59.197169 2505 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:34:59.197307 kubelet[2505]: I0912 17:34:59.197206 2505 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:59.197381 kubelet[2505]: I0912 17:34:59.197352 2505 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:34:59.197381 kubelet[2505]: I0912 17:34:59.197365 2505 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:34:59.197430 kubelet[2505]: I0912 17:34:59.197400 2505 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:34:59.197430 kubelet[2505]: I0912 17:34:59.197411 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:34:59.200556 kubelet[2505]: I0912 17:34:59.200503 2505 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:34:59.202827 kubelet[2505]: I0912 17:34:59.200942 2505 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:34:59.202827 kubelet[2505]: I0912 17:34:59.201605 2505 
server.go:1274] "Started kubelet" Sep 12 17:34:59.255250 kubelet[2505]: I0912 17:34:59.254825 2505 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:34:59.259131 kubelet[2505]: I0912 17:34:59.259106 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:34:59.265816 kubelet[2505]: I0912 17:34:59.265790 2505 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:34:59.266374 kubelet[2505]: I0912 17:34:59.259638 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:34:59.266374 kubelet[2505]: I0912 17:34:59.265983 2505 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:34:59.266374 kubelet[2505]: I0912 17:34:59.262651 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:34:59.266374 kubelet[2505]: I0912 17:34:59.266335 2505 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:34:59.267265 kubelet[2505]: I0912 17:34:59.267209 2505 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:34:59.268032 kubelet[2505]: I0912 17:34:59.267804 2505 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:34:59.268032 kubelet[2505]: I0912 17:34:59.259978 2505 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:34:59.269583 kubelet[2505]: E0912 17:34:59.262542 2505 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:34:59.269705 kubelet[2505]: I0912 17:34:59.269684 2505 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:34:59.270773 kubelet[2505]: I0912 17:34:59.270727 2505 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:34:59.280853 kubelet[2505]: I0912 17:34:59.280815 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:34:59.282832 kubelet[2505]: I0912 17:34:59.282801 2505 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:34:59.282832 kubelet[2505]: I0912 17:34:59.282824 2505 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:34:59.282924 kubelet[2505]: I0912 17:34:59.282842 2505 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:34:59.282924 kubelet[2505]: E0912 17:34:59.282879 2505 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:34:59.303596 kubelet[2505]: I0912 17:34:59.303571 2505 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:34:59.303596 kubelet[2505]: I0912 17:34:59.303588 2505 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:34:59.303709 kubelet[2505]: I0912 17:34:59.303606 2505 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:59.303815 kubelet[2505]: I0912 17:34:59.303798 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:34:59.303844 kubelet[2505]: I0912 17:34:59.303811 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:34:59.303844 kubelet[2505]: I0912 17:34:59.303833 2505 policy_none.go:49] "None policy: Start" Sep 12 17:34:59.304368 kubelet[2505]: I0912 17:34:59.304346 2505 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:34:59.304399 kubelet[2505]: I0912 17:34:59.304372 2505 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:34:59.304513 kubelet[2505]: I0912 17:34:59.304498 2505 state_mem.go:75] "Updated machine memory state" Sep 12 17:34:59.308951 kubelet[2505]: I0912 17:34:59.308922 2505 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:34:59.309123 kubelet[2505]: I0912 17:34:59.309084 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:34:59.309123 kubelet[2505]: I0912 17:34:59.309098 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:34:59.309441 kubelet[2505]: I0912 17:34:59.309354 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:34:59.390000 kubelet[2505]: E0912 17:34:59.389865 2505 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:59.415226 kubelet[2505]: I0912 17:34:59.415199 2505 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 17:34:59.420052 kubelet[2505]: I0912 17:34:59.420005 2505 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 17:34:59.420178 kubelet[2505]: I0912 17:34:59.420081 2505 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 17:34:59.473678 kubelet[2505]: I0912 17:34:59.473622 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:59.473678 kubelet[2505]: I0912 17:34:59.473672 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " 
pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:59.473788 kubelet[2505]: I0912 17:34:59.473702 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aabd0001a3abd7321f75995f63c7c8e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4aabd0001a3abd7321f75995f63c7c8e\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:34:59.473788 kubelet[2505]: I0912 17:34:59.473719 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:59.473788 kubelet[2505]: I0912 17:34:59.473783 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:59.473890 kubelet[2505]: I0912 17:34:59.473809 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:59.473890 kubelet[2505]: I0912 17:34:59.473825 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:59.473890 kubelet[2505]: I0912 17:34:59.473840 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:34:59.473890 kubelet[2505]: I0912 17:34:59.473855 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:34:59.688663 kubelet[2505]: E0912 17:34:59.688527 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:59.690443 kubelet[2505]: E0912 17:34:59.689889 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:59.690443 kubelet[2505]: E0912 17:34:59.690189 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.198530 kubelet[2505]: I0912 17:35:00.198481 2505 apiserver.go:52] "Watching apiserver" Sep 12 17:35:00.266410 kubelet[2505]: I0912 17:35:00.266359 2505 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:35:00.295134 kubelet[2505]: E0912 17:35:00.293042 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.295134 kubelet[2505]: E0912 17:35:00.293335 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.418448 kubelet[2505]: I0912 17:35:00.417709 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.417698024 podStartE2EDuration="1.417698024s" podCreationTimestamp="2025-09-12 17:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:00.417477201 +0000 UTC m=+1.316856633" watchObservedRunningTime="2025-09-12 17:35:00.417698024 +0000 UTC m=+1.317077456" Sep 12 17:35:00.418448 kubelet[2505]: E0912 17:35:00.417339 2505 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:35:00.418448 kubelet[2505]: E0912 17:35:00.417954 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:00.428709 kubelet[2505]: I0912 17:35:00.428589 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.428531211 podStartE2EDuration="2.428531211s" podCreationTimestamp="2025-09-12 17:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:00.427866856 +0000 UTC m=+1.327246278" watchObservedRunningTime="2025-09-12 17:35:00.428531211 +0000 UTC m=+1.327910643" Sep 12 17:35:00.440202 kubelet[2505]: I0912 17:35:00.440144 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.440125987 podStartE2EDuration="1.440125987s" podCreationTimestamp="2025-09-12 17:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:00.439909302 +0000 UTC m=+1.339288744" watchObservedRunningTime="2025-09-12 17:35:00.440125987 +0000 UTC m=+1.339505419" Sep 12 17:35:01.294822 kubelet[2505]: E0912 17:35:01.294777 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:03.755211 kubelet[2505]: I0912 17:35:03.755177 2505 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:35:03.755884 kubelet[2505]: I0912 17:35:03.755673 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:35:03.755920 containerd[1461]: time="2025-09-12T17:35:03.755506129Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:35:04.512767 systemd[1]: Created slice kubepods-besteffort-podc76cd7bb_66de_4d82_bb54_33c1af2b7fa3.slice - libcontainer container kubepods-besteffort-podc76cd7bb_66de_4d82_bb54_33c1af2b7fa3.slice. Sep 12 17:35:04.606728 kubelet[2505]: I0912 17:35:04.606681 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jl5\" (UniqueName: \"kubernetes.io/projected/c76cd7bb-66de-4d82-bb54-33c1af2b7fa3-kube-api-access-55jl5\") pod \"kube-proxy-8fdrp\" (UID: \"c76cd7bb-66de-4d82-bb54-33c1af2b7fa3\") " pod="kube-system/kube-proxy-8fdrp" Sep 12 17:35:04.606728 kubelet[2505]: I0912 17:35:04.606721 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c76cd7bb-66de-4d82-bb54-33c1af2b7fa3-kube-proxy\") pod \"kube-proxy-8fdrp\" (UID: \"c76cd7bb-66de-4d82-bb54-33c1af2b7fa3\") " pod="kube-system/kube-proxy-8fdrp" Sep 12 17:35:04.606728 kubelet[2505]: I0912 17:35:04.606749 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c76cd7bb-66de-4d82-bb54-33c1af2b7fa3-xtables-lock\") pod \"kube-proxy-8fdrp\" (UID: \"c76cd7bb-66de-4d82-bb54-33c1af2b7fa3\") " pod="kube-system/kube-proxy-8fdrp" Sep 12 17:35:04.606944 kubelet[2505]: I0912 17:35:04.606776 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76cd7bb-66de-4d82-bb54-33c1af2b7fa3-lib-modules\") pod \"kube-proxy-8fdrp\" (UID: \"c76cd7bb-66de-4d82-bb54-33c1af2b7fa3\") " pod="kube-system/kube-proxy-8fdrp" Sep 12 17:35:04.826276 kubelet[2505]: E0912 17:35:04.826129 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:04.826888 containerd[1461]: time="2025-09-12T17:35:04.826846016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fdrp,Uid:c76cd7bb-66de-4d82-bb54-33c1af2b7fa3,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:04.835766 systemd[1]: Created slice kubepods-besteffort-podea6fbbd3_beea_40e9_891c_7d6d29e8d354.slice - libcontainer container kubepods-besteffort-podea6fbbd3_beea_40e9_891c_7d6d29e8d354.slice. Sep 12 17:35:04.871805 containerd[1461]: time="2025-09-12T17:35:04.871648712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:04.871805 containerd[1461]: time="2025-09-12T17:35:04.871728836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:04.871805 containerd[1461]: time="2025-09-12T17:35:04.871750277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:04.871955 containerd[1461]: time="2025-09-12T17:35:04.871891716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:04.904885 systemd[1]: Started cri-containerd-19e58b3cfebeeb7573423ca0a7a2a0b51ce4c924110d60ccaa5c3a8d0a0d2d07.scope - libcontainer container 19e58b3cfebeeb7573423ca0a7a2a0b51ce4c924110d60ccaa5c3a8d0a0d2d07. Sep 12 17:35:04.909791 kubelet[2505]: I0912 17:35:04.909697 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea6fbbd3-beea-40e9-891c-7d6d29e8d354-var-lib-calico\") pod \"tigera-operator-58fc44c59b-pvwc7\" (UID: \"ea6fbbd3-beea-40e9-891c-7d6d29e8d354\") " pod="tigera-operator/tigera-operator-58fc44c59b-pvwc7" Sep 12 17:35:04.909791 kubelet[2505]: I0912 17:35:04.909730 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5br5p\" (UniqueName: \"kubernetes.io/projected/ea6fbbd3-beea-40e9-891c-7d6d29e8d354-kube-api-access-5br5p\") pod \"tigera-operator-58fc44c59b-pvwc7\" (UID: \"ea6fbbd3-beea-40e9-891c-7d6d29e8d354\") " pod="tigera-operator/tigera-operator-58fc44c59b-pvwc7" Sep 12 17:35:04.931715 containerd[1461]: time="2025-09-12T17:35:04.931680085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fdrp,Uid:c76cd7bb-66de-4d82-bb54-33c1af2b7fa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e58b3cfebeeb7573423ca0a7a2a0b51ce4c924110d60ccaa5c3a8d0a0d2d07\"" Sep 12 17:35:04.932597 kubelet[2505]: E0912 17:35:04.932525 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:04.934432 containerd[1461]: time="2025-09-12T17:35:04.934398292Z" level=info msg="CreateContainer within sandbox \"19e58b3cfebeeb7573423ca0a7a2a0b51ce4c924110d60ccaa5c3a8d0a0d2d07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:35:05.139695 containerd[1461]: time="2025-09-12T17:35:05.139649800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pvwc7,Uid:ea6fbbd3-beea-40e9-891c-7d6d29e8d354,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:35:05.722663 containerd[1461]: time="2025-09-12T17:35:05.722597714Z" level=info msg="CreateContainer within sandbox \"19e58b3cfebeeb7573423ca0a7a2a0b51ce4c924110d60ccaa5c3a8d0a0d2d07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc17b5f17d7beedc757ac6f511bfe8968f08e31219e268799e207e5cc1374dac\"" Sep 12 17:35:05.723491 containerd[1461]: time="2025-09-12T17:35:05.723444909Z" level=info msg="StartContainer for \"bc17b5f17d7beedc757ac6f511bfe8968f08e31219e268799e207e5cc1374dac\"" Sep 12 17:35:05.742627 containerd[1461]: time="2025-09-12T17:35:05.741000971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:05.742627 containerd[1461]: time="2025-09-12T17:35:05.742576273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:05.742627 containerd[1461]: time="2025-09-12T17:35:05.742597613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:05.742897 containerd[1461]: time="2025-09-12T17:35:05.742727901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:05.766053 systemd[1]: Started cri-containerd-bc17b5f17d7beedc757ac6f511bfe8968f08e31219e268799e207e5cc1374dac.scope - libcontainer container bc17b5f17d7beedc757ac6f511bfe8968f08e31219e268799e207e5cc1374dac. Sep 12 17:35:05.769608 systemd[1]: Started cri-containerd-6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda.scope - libcontainer container 6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda. Sep 12 17:35:05.814337 containerd[1461]: time="2025-09-12T17:35:05.814279759Z" level=info msg="StartContainer for \"bc17b5f17d7beedc757ac6f511bfe8968f08e31219e268799e207e5cc1374dac\" returns successfully" Sep 12 17:35:05.822564 containerd[1461]: time="2025-09-12T17:35:05.822204043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-pvwc7,Uid:ea6fbbd3-beea-40e9-891c-7d6d29e8d354,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda\"" Sep 12 17:35:05.825542 containerd[1461]: time="2025-09-12T17:35:05.825277690Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:35:06.304663 kubelet[2505]: E0912 17:35:06.304627 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:06.318186 kubelet[2505]: I0912 17:35:06.318132 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8fdrp" podStartSLOduration=2.318116543 podStartE2EDuration="2.318116543s" podCreationTimestamp="2025-09-12 17:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:06.316799425 +0000 UTC m=+7.216178857" watchObservedRunningTime="2025-09-12 17:35:06.318116543 +0000 UTC m=+7.217495975" Sep 12 17:35:06.469626 kubelet[2505]: E0912 17:35:06.469587 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:06.587525 kubelet[2505]: E0912 17:35:06.587369 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.307725 kubelet[2505]: E0912 17:35:07.306662 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.307725 kubelet[2505]: E0912 17:35:07.306843 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.307725 kubelet[2505]: E0912 17:35:07.307010 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:07.393288 kubelet[2505]: E0912 17:35:07.393234 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:08.310435 kubelet[2505]: E0912 17:35:08.310231 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:09.354989 update_engine[1451]: I20250912 17:35:09.354871 1451 update_attempter.cc:509] Updating boot flags... Sep 12 17:35:09.408813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2807) Sep 12 17:35:09.449848 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2805) Sep 12 17:35:09.511806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2805) Sep 12 17:35:10.754782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933614474.mount: Deactivated successfully. Sep 12 17:35:12.007479 containerd[1461]: time="2025-09-12T17:35:12.007425950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:12.093083 containerd[1461]: time="2025-09-12T17:35:12.093018725Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:35:12.155808 containerd[1461]: time="2025-09-12T17:35:12.155751025Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:12.172533 containerd[1461]: time="2025-09-12T17:35:12.172495561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:12.173250 containerd[1461]: time="2025-09-12T17:35:12.173216779Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 6.347895004s" Sep 12 17:35:12.173250 containerd[1461]: time="2025-09-12T17:35:12.173249861Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:35:12.175161 containerd[1461]: time="2025-09-12T17:35:12.175113352Z" level=info msg="CreateContainer within sandbox \"6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:35:12.198739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003809099.mount: Deactivated successfully. Sep 12 17:35:12.199906 containerd[1461]: time="2025-09-12T17:35:12.199857084Z" level=info msg="CreateContainer within sandbox \"6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b\"" Sep 12 17:35:12.200550 containerd[1461]: time="2025-09-12T17:35:12.200525470Z" level=info msg="StartContainer for \"811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b\"" Sep 12 17:35:12.237932 systemd[1]: Started cri-containerd-811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b.scope - libcontainer container 811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b. 
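The run of entries above walks the full containerd lifecycle for the tigera-operator container: the pull resolves quay.io/tigera/operator:v1.38.6 to its sha256 digest after roughly 6.35s, CreateContainer materializes the container inside the already-running operator sandbox, and systemd then tracks the runc shim under a cri-containerd-*.scope unit. The kubelet drives all of this over the CRI gRPC API; the sketch below reproduces the same pull / create / start sequence against containerd's Go client directly. It is a minimal illustration under assumed defaults (the v1 Go client import paths, the standard containerd socket, the k8s.io CRI namespace, and hypothetical container/snapshot IDs), not the kubelet's actual code path.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the default containerd socket, the same one the CRI uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image -- the step the log times at ~6.35s.
	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.6",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container (CreateContainer in the log) ...
	container, err := client.NewContainer(ctx, "tigera-operator-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("tigera-operator-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// ... then start its task (StartContainer in the log); containerd
	// spawns a shim, which systemd reports as a cri-containerd-*.scope.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container with pid %d", task.Pid())
}
```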
Sep 12 17:35:12.269322 containerd[1461]: time="2025-09-12T17:35:12.269186995Z" level=info msg="StartContainer for \"811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b\" returns successfully" Sep 12 17:35:14.703018 systemd[1]: cri-containerd-811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b.scope: Deactivated successfully. Sep 12 17:35:14.728465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b-rootfs.mount: Deactivated successfully. Sep 12 17:35:15.392460 containerd[1461]: time="2025-09-12T17:35:15.392367769Z" level=info msg="shim disconnected" id=811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b namespace=k8s.io Sep 12 17:35:15.392460 containerd[1461]: time="2025-09-12T17:35:15.392454463Z" level=warning msg="cleaning up after shim disconnected" id=811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b namespace=k8s.io Sep 12 17:35:15.392460 containerd[1461]: time="2025-09-12T17:35:15.392466696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:16.328396 kubelet[2505]: I0912 17:35:16.328361 2505 scope.go:117] "RemoveContainer" containerID="811c41cf7afd4b8f4240d1ecae7cafb96f097d17747f77e6f38ed288ccfbd98b" Sep 12 17:35:16.329913 containerd[1461]: time="2025-09-12T17:35:16.329857867Z" level=info msg="CreateContainer within sandbox \"6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:35:16.345406 containerd[1461]: time="2025-09-12T17:35:16.345343639Z" level=info msg="CreateContainer within sandbox \"6c4f09d2f0a53a22b7c6258aa2dd651843f6ea769197a356e7d6fca4ba1d9fda\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0cb9973b370c3683abef57c463315548296fd96e0e990c9de2ccf72f8dc40988\"" Sep 12 17:35:16.346130 containerd[1461]: time="2025-09-12T17:35:16.346094508Z" level=info msg="StartContainer for \"0cb9973b370c3683abef57c463315548296fd96e0e990c9de2ccf72f8dc40988\"" Sep 12 17:35:16.384089 systemd[1]: Started cri-containerd-0cb9973b370c3683abef57c463315548296fd96e0e990c9de2ccf72f8dc40988.scope - libcontainer container 0cb9973b370c3683abef57c463315548296fd96e0e990c9de2ccf72f8dc40988. Sep 12 17:35:16.412176 containerd[1461]: time="2025-09-12T17:35:16.412117667Z" level=info msg="StartContainer for \"0cb9973b370c3683abef57c463315548296fd96e0e990c9de2ccf72f8dc40988\" returns successfully" Sep 12 17:35:17.339822 kubelet[2505]: I0912 17:35:17.339593 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-pvwc7" podStartSLOduration=6.990153247 podStartE2EDuration="13.339574921s" podCreationTimestamp="2025-09-12 17:35:04 +0000 UTC" firstStartedPulling="2025-09-12 17:35:05.824591443 +0000 UTC m=+6.723970875" lastFinishedPulling="2025-09-12 17:35:12.174013107 +0000 UTC m=+13.073392549" observedRunningTime="2025-09-12 17:35:12.330567635 +0000 UTC m=+13.229947067" watchObservedRunningTime="2025-09-12 17:35:17.339574921 +0000 UTC m=+18.238954354" Sep 12 17:35:18.114680 sudo[1643]: pam_unix(sudo:session): session closed for user root Sep 12 17:35:18.117002 sshd[1639]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:18.120659 systemd[1]: sshd@6-10.0.0.87:22-10.0.0.1:39956.service: Deactivated successfully. Sep 12 17:35:18.123050 systemd[1]: session-7.scope: Deactivated successfully. 
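The pod_startup_latency_tracker entry above is also a compact worked example of how kubelet separates its two latency figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the time spent pulling images. That is why the earlier kube-* static pods, which had nothing to pull (zero-valued pull timestamps), report identical values, whereas the operator pod shows ~6.99s against a ~13.34s end-to-end figure. Recomputing from the monotonic m=+ offsets printed in the entry reproduces the logged value exactly; the snippet below is just that arithmetic with the log's numbers, not kubelet code.

```go
package main

import "fmt"

func main() {
	// Values copied from the pod_startup_latency_tracker entry above,
	// using the monotonic m=+ offsets kubelet prints after each timestamp.
	const (
		e2e       = 13.339574921 // podStartE2EDuration (s): observed running - pod creation
		pullStart = 6.723970875  // firstStartedPulling, m=+ offset (s)
		pullEnd   = 13.073392549 // lastFinishedPulling, m=+ offset (s)
	)
	slo := e2e - (pullEnd - pullStart)
	fmt.Printf("podStartSLOduration = %.9f s\n", slo) // 6.990153247, as logged
}
```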
Sep 12 17:35:18.123703 systemd[1]: session-7.scope: Consumed 5.148s CPU time, 157.9M memory peak, 0B memory swap peak. Sep 12 17:35:18.125670 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:35:18.127158 systemd-logind[1445]: Removed session 7. Sep 12 17:35:23.643671 systemd[1]: Created slice kubepods-besteffort-pod882084cf_d942_48c0_977a_f6a1d3da3bad.slice - libcontainer container kubepods-besteffort-pod882084cf_d942_48c0_977a_f6a1d3da3bad.slice. Sep 12 17:35:23.723202 systemd[1]: Created slice kubepods-besteffort-pod0a11fd3c_ab7c_4108_a513_183df2a0b68d.slice - libcontainer container kubepods-besteffort-pod0a11fd3c_ab7c_4108_a513_183df2a0b68d.slice. Sep 12 17:35:23.742333 kubelet[2505]: I0912 17:35:23.742271 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a11fd3c-ab7c-4108-a513-183df2a0b68d-tigera-ca-bundle\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742333 kubelet[2505]: I0912 17:35:23.742330 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-var-run-calico\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742857 kubelet[2505]: I0912 17:35:23.742377 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdpf\" (UniqueName: \"kubernetes.io/projected/882084cf-d942-48c0-977a-f6a1d3da3bad-kube-api-access-qvdpf\") pod \"calico-typha-6fcb644cf7-qpqsk\" (UID: \"882084cf-d942-48c0-977a-f6a1d3da3bad\") " pod="calico-system/calico-typha-6fcb644cf7-qpqsk" Sep 12 17:35:23.742857 kubelet[2505]: I0912 17:35:23.742461 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a11fd3c-ab7c-4108-a513-183df2a0b68d-node-certs\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742857 kubelet[2505]: I0912 17:35:23.742498 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6qx9\" (UniqueName: \"kubernetes.io/projected/0a11fd3c-ab7c-4108-a513-183df2a0b68d-kube-api-access-r6qx9\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742857 kubelet[2505]: I0912 17:35:23.742548 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-flexvol-driver-host\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742857 kubelet[2505]: I0912 17:35:23.742574 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-var-lib-calico\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742998 kubelet[2505]: I0912 17:35:23.742594 2505 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882084cf-d942-48c0-977a-f6a1d3da3bad-tigera-ca-bundle\") pod \"calico-typha-6fcb644cf7-qpqsk\" (UID: \"882084cf-d942-48c0-977a-f6a1d3da3bad\") " pod="calico-system/calico-typha-6fcb644cf7-qpqsk" Sep 12 17:35:23.742998 kubelet[2505]: I0912 17:35:23.742632 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-cni-bin-dir\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742998 kubelet[2505]: I0912 17:35:23.742654 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-cni-net-dir\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.742998 kubelet[2505]: I0912 17:35:23.742674 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/882084cf-d942-48c0-977a-f6a1d3da3bad-typha-certs\") pod \"calico-typha-6fcb644cf7-qpqsk\" (UID: \"882084cf-d942-48c0-977a-f6a1d3da3bad\") " pod="calico-system/calico-typha-6fcb644cf7-qpqsk" Sep 12 17:35:23.742998 kubelet[2505]: I0912 17:35:23.742695 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-policysync\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.743118 kubelet[2505]: I0912 17:35:23.742734 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-cni-log-dir\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.743118 kubelet[2505]: I0912 17:35:23.742799 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-lib-modules\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.743118 kubelet[2505]: I0912 17:35:23.742842 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a11fd3c-ab7c-4108-a513-183df2a0b68d-xtables-lock\") pod \"calico-node-7g4sd\" (UID: \"0a11fd3c-ab7c-4108-a513-183df2a0b68d\") " pod="calico-system/calico-node-7g4sd" Sep 12 17:35:23.821719 kubelet[2505]: E0912 17:35:23.821662 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:23.871315 kubelet[2505]: E0912 17:35:23.871077 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected 
end of JSON input Sep 12 17:35:23.871315 kubelet[2505]: W0912 17:35:23.871110 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.871315 kubelet[2505]: E0912 17:35:23.871150 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.876290 kubelet[2505]: E0912 17:35:23.875964 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.876290 kubelet[2505]: W0912 17:35:23.875990 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.876290 kubelet[2505]: E0912 17:35:23.876021 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.876562 kubelet[2505]: E0912 17:35:23.876549 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.876612 kubelet[2505]: W0912 17:35:23.876601 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.876679 kubelet[2505]: E0912 17:35:23.876665 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.876993 kubelet[2505]: E0912 17:35:23.876950 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.876993 kubelet[2505]: W0912 17:35:23.876962 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.876993 kubelet[2505]: E0912 17:35:23.876971 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.915108 kubelet[2505]: E0912 17:35:23.915000 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.915108 kubelet[2505]: W0912 17:35:23.915034 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.915108 kubelet[2505]: E0912 17:35:23.915075 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.915416 kubelet[2505]: E0912 17:35:23.915396 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.915416 kubelet[2505]: W0912 17:35:23.915410 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.915482 kubelet[2505]: E0912 17:35:23.915420 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.915792 kubelet[2505]: E0912 17:35:23.915778 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.915916 kubelet[2505]: W0912 17:35:23.915856 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.915916 kubelet[2505]: E0912 17:35:23.915870 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.916639 kubelet[2505]: E0912 17:35:23.916627 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.916910 kubelet[2505]: W0912 17:35:23.916693 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.916910 kubelet[2505]: E0912 17:35:23.916706 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.917291 kubelet[2505]: E0912 17:35:23.917279 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.917349 kubelet[2505]: W0912 17:35:23.917338 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.917400 kubelet[2505]: E0912 17:35:23.917390 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.918940 kubelet[2505]: E0912 17:35:23.918927 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.919100 kubelet[2505]: W0912 17:35:23.919003 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.919100 kubelet[2505]: E0912 17:35:23.919016 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.919274 kubelet[2505]: E0912 17:35:23.919255 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.919274 kubelet[2505]: W0912 17:35:23.919268 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.919327 kubelet[2505]: E0912 17:35:23.919281 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.919476 kubelet[2505]: E0912 17:35:23.919464 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.919476 kubelet[2505]: W0912 17:35:23.919474 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.919531 kubelet[2505]: E0912 17:35:23.919482 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.919710 kubelet[2505]: E0912 17:35:23.919701 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.919710 kubelet[2505]: W0912 17:35:23.919709 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.919783 kubelet[2505]: E0912 17:35:23.919717 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.919966 kubelet[2505]: E0912 17:35:23.919952 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.919966 kubelet[2505]: W0912 17:35:23.919963 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.920035 kubelet[2505]: E0912 17:35:23.919973 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.920511 kubelet[2505]: E0912 17:35:23.920163 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.920511 kubelet[2505]: W0912 17:35:23.920174 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.920511 kubelet[2505]: E0912 17:35:23.920184 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.920643 kubelet[2505]: E0912 17:35:23.920615 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.920643 kubelet[2505]: W0912 17:35:23.920640 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.920706 kubelet[2505]: E0912 17:35:23.920650 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.920947 kubelet[2505]: E0912 17:35:23.920926 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.920947 kubelet[2505]: W0912 17:35:23.920939 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.921003 kubelet[2505]: E0912 17:35:23.920961 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.921173 kubelet[2505]: E0912 17:35:23.921161 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.921199 kubelet[2505]: W0912 17:35:23.921172 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.921199 kubelet[2505]: E0912 17:35:23.921182 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.921479 kubelet[2505]: E0912 17:35:23.921459 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.921479 kubelet[2505]: W0912 17:35:23.921471 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.921530 kubelet[2505]: E0912 17:35:23.921481 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.923992 kubelet[2505]: E0912 17:35:23.923974 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.923992 kubelet[2505]: W0912 17:35:23.923987 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.924109 kubelet[2505]: E0912 17:35:23.923998 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.924277 kubelet[2505]: E0912 17:35:23.924260 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.924277 kubelet[2505]: W0912 17:35:23.924275 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.924360 kubelet[2505]: E0912 17:35:23.924288 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.924551 kubelet[2505]: E0912 17:35:23.924520 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.924551 kubelet[2505]: W0912 17:35:23.924535 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.924551 kubelet[2505]: E0912 17:35:23.924547 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.924816 kubelet[2505]: E0912 17:35:23.924799 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.924816 kubelet[2505]: W0912 17:35:23.924814 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.924874 kubelet[2505]: E0912 17:35:23.924825 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.925075 kubelet[2505]: E0912 17:35:23.925049 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.925075 kubelet[2505]: W0912 17:35:23.925065 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.925124 kubelet[2505]: E0912 17:35:23.925075 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.944518 kubelet[2505]: E0912 17:35:23.944467 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.944518 kubelet[2505]: W0912 17:35:23.944489 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.944518 kubelet[2505]: E0912 17:35:23.944506 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.944725 kubelet[2505]: I0912 17:35:23.944539 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/33030e82-9043-4dea-9a42-6edffd5b404a-socket-dir\") pod \"csi-node-driver-5kc8t\" (UID: \"33030e82-9043-4dea-9a42-6edffd5b404a\") " pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:23.944804 kubelet[2505]: E0912 17:35:23.944786 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.944804 kubelet[2505]: W0912 17:35:23.944802 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.944874 kubelet[2505]: E0912 17:35:23.944822 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.944874 kubelet[2505]: I0912 17:35:23.944837 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/33030e82-9043-4dea-9a42-6edffd5b404a-registration-dir\") pod \"csi-node-driver-5kc8t\" (UID: \"33030e82-9043-4dea-9a42-6edffd5b404a\") " pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:23.945207 kubelet[2505]: E0912 17:35:23.945171 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.945207 kubelet[2505]: W0912 17:35:23.945198 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.945266 kubelet[2505]: E0912 17:35:23.945226 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.945469 kubelet[2505]: E0912 17:35:23.945454 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.945469 kubelet[2505]: W0912 17:35:23.945465 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.945534 kubelet[2505]: E0912 17:35:23.945482 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.945772 kubelet[2505]: E0912 17:35:23.945737 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.945772 kubelet[2505]: W0912 17:35:23.945751 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.945836 kubelet[2505]: E0912 17:35:23.945784 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.945836 kubelet[2505]: I0912 17:35:23.945818 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/33030e82-9043-4dea-9a42-6edffd5b404a-kubelet-dir\") pod \"csi-node-driver-5kc8t\" (UID: \"33030e82-9043-4dea-9a42-6edffd5b404a\") " pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:23.946068 kubelet[2505]: E0912 17:35:23.946052 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.946106 kubelet[2505]: W0912 17:35:23.946069 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.946106 kubelet[2505]: E0912 17:35:23.946084 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.946106 kubelet[2505]: I0912 17:35:23.946098 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2gw\" (UniqueName: \"kubernetes.io/projected/33030e82-9043-4dea-9a42-6edffd5b404a-kube-api-access-8g2gw\") pod \"csi-node-driver-5kc8t\" (UID: \"33030e82-9043-4dea-9a42-6edffd5b404a\") " pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:23.946373 kubelet[2505]: E0912 17:35:23.946358 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.946373 kubelet[2505]: W0912 17:35:23.946371 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.946431 kubelet[2505]: E0912 17:35:23.946387 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.946609 kubelet[2505]: E0912 17:35:23.946598 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.946609 kubelet[2505]: W0912 17:35:23.946607 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.946668 kubelet[2505]: E0912 17:35:23.946631 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.946876 kubelet[2505]: E0912 17:35:23.946865 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.946876 kubelet[2505]: W0912 17:35:23.946875 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.946932 kubelet[2505]: E0912 17:35:23.946887 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.947093 kubelet[2505]: E0912 17:35:23.947082 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.947093 kubelet[2505]: W0912 17:35:23.947090 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.947142 kubelet[2505]: E0912 17:35:23.947103 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.947356 kubelet[2505]: E0912 17:35:23.947336 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.947356 kubelet[2505]: W0912 17:35:23.947353 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.947421 kubelet[2505]: E0912 17:35:23.947369 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.947421 kubelet[2505]: I0912 17:35:23.947387 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/33030e82-9043-4dea-9a42-6edffd5b404a-varrun\") pod \"csi-node-driver-5kc8t\" (UID: \"33030e82-9043-4dea-9a42-6edffd5b404a\") " pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:23.947667 kubelet[2505]: E0912 17:35:23.947650 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.947667 kubelet[2505]: W0912 17:35:23.947662 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.947728 kubelet[2505]: E0912 17:35:23.947690 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.947891 kubelet[2505]: E0912 17:35:23.947876 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.947891 kubelet[2505]: W0912 17:35:23.947887 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.947945 kubelet[2505]: E0912 17:35:23.947900 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:23.948093 kubelet[2505]: E0912 17:35:23.948078 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.948093 kubelet[2505]: W0912 17:35:23.948089 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.948156 kubelet[2505]: E0912 17:35:23.948097 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.948321 kubelet[2505]: E0912 17:35:23.948305 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:23.948321 kubelet[2505]: W0912 17:35:23.948315 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:23.948381 kubelet[2505]: E0912 17:35:23.948323 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:23.952004 kubelet[2505]: E0912 17:35:23.951978 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:23.952895 containerd[1461]: time="2025-09-12T17:35:23.952850253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcb644cf7-qpqsk,Uid:882084cf-d942-48c0-977a-f6a1d3da3bad,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:23.986431 containerd[1461]: time="2025-09-12T17:35:23.986317789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:23.986582 containerd[1461]: time="2025-09-12T17:35:23.986434669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:23.986582 containerd[1461]: time="2025-09-12T17:35:23.986454396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:23.987013 containerd[1461]: time="2025-09-12T17:35:23.986609559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.008922 systemd[1]: Started cri-containerd-90d75f776fa8f1a87b9e3651177f6735d99518d64e408f3899b6976d8b35e2e9.scope - libcontainer container 90d75f776fa8f1a87b9e3651177f6735d99518d64e408f3899b6976d8b35e2e9. 
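The flood of driver-call.go / plugins.go errors through this stretch all come from kubelet's FlexVolume prober: whenever the plugin directory is rescanned (here, triggered by the calico-node volume setup), kubelet execs each driver binary with the argument `init` and expects a JSON status object on stdout. Calico's nodeagent~uds driver directory exists under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the uds binary is not installed yet, so the exec fails, stdout is empty, and unmarshalling the empty string yields "unexpected end of JSON input". The sketch below reproduces that failure mode; the status type and the bare "uds" invocation are illustrative, not kubelet's actual source.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a stand-in for the JSON object a FlexVolume driver is
// expected to print in response to "init".
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probe execs the driver with "init" and decodes its stdout, mirroring the
// two-step failure in the log: an exec error, then an unmarshal error.
func probe(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// With the binary missing, this is the W-level line:
		// "driver call failed: ... executable file not found in $PATH"
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		// Unmarshalling "" is the E-level line:
		// "Failed to unmarshal output ... unexpected end of JSON input"
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, jerr)
	}
	return &st, nil
}

func main() {
	if _, err := probe("uds"); err != nil {
		fmt.Println(err)
	}
}
```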
Sep 12 17:35:24.027409 containerd[1461]: time="2025-09-12T17:35:24.027357761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7g4sd,Uid:0a11fd3c-ab7c-4108-a513-183df2a0b68d,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:24.048332 kubelet[2505]: E0912 17:35:24.048176 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.048332 kubelet[2505]: W0912 17:35:24.048199 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.048332 kubelet[2505]: E0912 17:35:24.048219 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.051185 kubelet[2505]: E0912 17:35:24.051155 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.051185 kubelet[2505]: W0912 17:35:24.051172 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.051280 kubelet[2505]: E0912 17:35:24.051198 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.051659 kubelet[2505]: E0912 17:35:24.051639 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.051659 kubelet[2505]: W0912 17:35:24.051655 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.051725 kubelet[2505]: E0912 17:35:24.051680 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.051991 kubelet[2505]: E0912 17:35:24.051974 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.051991 kubelet[2505]: W0912 17:35:24.051987 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.052069 kubelet[2505]: E0912 17:35:24.052003 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.052298 kubelet[2505]: E0912 17:35:24.052282 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.052298 kubelet[2505]: W0912 17:35:24.052295 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.052376 kubelet[2505]: E0912 17:35:24.052349 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.052608 kubelet[2505]: E0912 17:35:24.052590 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.052608 kubelet[2505]: W0912 17:35:24.052604 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.052830 kubelet[2505]: E0912 17:35:24.052624 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.052972 kubelet[2505]: E0912 17:35:24.052944 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.052972 kubelet[2505]: W0912 17:35:24.052959 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.052972 kubelet[2505]: E0912 17:35:24.052972 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.053294 kubelet[2505]: E0912 17:35:24.053265 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.053294 kubelet[2505]: W0912 17:35:24.053280 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.053358 kubelet[2505]: E0912 17:35:24.053303 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.053699 kubelet[2505]: E0912 17:35:24.053667 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.053699 kubelet[2505]: W0912 17:35:24.053683 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.053779 kubelet[2505]: E0912 17:35:24.053708 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.054039 kubelet[2505]: E0912 17:35:24.054022 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.054039 kubelet[2505]: W0912 17:35:24.054035 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.054124 kubelet[2505]: E0912 17:35:24.054050 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.054273 containerd[1461]: time="2025-09-12T17:35:24.054239268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcb644cf7-qpqsk,Uid:882084cf-d942-48c0-977a-f6a1d3da3bad,Namespace:calico-system,Attempt:0,} returns sandbox id \"90d75f776fa8f1a87b9e3651177f6735d99518d64e408f3899b6976d8b35e2e9\"" Sep 12 17:35:24.054326 kubelet[2505]: E0912 17:35:24.054269 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.054326 kubelet[2505]: W0912 17:35:24.054280 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.054397 kubelet[2505]: E0912 17:35:24.054333 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.054697 kubelet[2505]: E0912 17:35:24.054673 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.054697 kubelet[2505]: W0912 17:35:24.054688 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.054865 kubelet[2505]: E0912 17:35:24.054825 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.054962 kubelet[2505]: E0912 17:35:24.054943 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.054962 kubelet[2505]: W0912 17:35:24.054956 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.055043 kubelet[2505]: E0912 17:35:24.054986 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.055335 kubelet[2505]: E0912 17:35:24.055306 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.055335 kubelet[2505]: W0912 17:35:24.055319 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.055411 kubelet[2505]: E0912 17:35:24.055353 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.055978 kubelet[2505]: E0912 17:35:24.055957 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.055978 kubelet[2505]: W0912 17:35:24.055971 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.056070 kubelet[2505]: E0912 17:35:24.055987 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.056309 kubelet[2505]: E0912 17:35:24.056279 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.056309 kubelet[2505]: W0912 17:35:24.056292 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.056436 kubelet[2505]: E0912 17:35:24.056383 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.056607 kubelet[2505]: E0912 17:35:24.056586 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.056607 kubelet[2505]: W0912 17:35:24.056599 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.056711 kubelet[2505]: E0912 17:35:24.056657 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.057384 kubelet[2505]: E0912 17:35:24.057360 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.057384 kubelet[2505]: W0912 17:35:24.057378 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.057500 kubelet[2505]: E0912 17:35:24.057461 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.057612 kubelet[2505]: E0912 17:35:24.057592 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.057612 kubelet[2505]: W0912 17:35:24.057608 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.057714 kubelet[2505]: E0912 17:35:24.057694 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.057864 kubelet[2505]: E0912 17:35:24.057844 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.057864 kubelet[2505]: W0912 17:35:24.057857 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.057948 kubelet[2505]: E0912 17:35:24.057881 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.058195 kubelet[2505]: E0912 17:35:24.058164 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.058195 kubelet[2505]: W0912 17:35:24.058178 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.058195 kubelet[2505]: E0912 17:35:24.058196 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.059101 kubelet[2505]: E0912 17:35:24.058715 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.059101 kubelet[2505]: W0912 17:35:24.058729 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.059101 kubelet[2505]: E0912 17:35:24.058748 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.059229 kubelet[2505]: E0912 17:35:24.059108 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.059229 kubelet[2505]: W0912 17:35:24.059118 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.059229 kubelet[2505]: E0912 17:35:24.059134 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:24.059698 kubelet[2505]: E0912 17:35:24.059546 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.059698 kubelet[2505]: W0912 17:35:24.059663 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.059698 kubelet[2505]: E0912 17:35:24.059682 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.060866 kubelet[2505]: E0912 17:35:24.059927 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:24.060866 kubelet[2505]: E0912 17:35:24.060077 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.060866 kubelet[2505]: W0912 17:35:24.060085 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.060866 kubelet[2505]: E0912 17:35:24.060095 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.061565 containerd[1461]: time="2025-09-12T17:35:24.061455254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:24.061565 containerd[1461]: time="2025-09-12T17:35:24.061517160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:24.061565 containerd[1461]: time="2025-09-12T17:35:24.061531257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.062856 containerd[1461]: time="2025-09-12T17:35:24.061646053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.065056 containerd[1461]: time="2025-09-12T17:35:24.065008819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:35:24.072862 kubelet[2505]: E0912 17:35:24.072838 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:24.072862 kubelet[2505]: W0912 17:35:24.072856 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:24.072985 kubelet[2505]: E0912 17:35:24.072869 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:24.089934 systemd[1]: Started cri-containerd-1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be.scope - libcontainer container 1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be. 
Sep 12 17:35:24.117114 containerd[1461]: time="2025-09-12T17:35:24.117067232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7g4sd,Uid:0a11fd3c-ab7c-4108-a513-183df2a0b68d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\"" Sep 12 17:35:25.283835 kubelet[2505]: E0912 17:35:25.283771 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:26.758536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271203613.mount: Deactivated successfully. Sep 12 17:35:27.100161 containerd[1461]: time="2025-09-12T17:35:27.100031346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:27.100939 containerd[1461]: time="2025-09-12T17:35:27.100900463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:35:27.101973 containerd[1461]: time="2025-09-12T17:35:27.101929430Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:27.103941 containerd[1461]: time="2025-09-12T17:35:27.103903245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:27.104480 containerd[1461]: time="2025-09-12T17:35:27.104441037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.039294149s" Sep 12 17:35:27.104480 containerd[1461]: time="2025-09-12T17:35:27.104469832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:35:27.105514 containerd[1461]: time="2025-09-12T17:35:27.105486736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:35:27.113590 containerd[1461]: time="2025-09-12T17:35:27.113535448Z" level=info msg="CreateContainer within sandbox \"90d75f776fa8f1a87b9e3651177f6735d99518d64e408f3899b6976d8b35e2e9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:35:27.127112 containerd[1461]: time="2025-09-12T17:35:27.127066570Z" level=info msg="CreateContainer within sandbox \"90d75f776fa8f1a87b9e3651177f6735d99518d64e408f3899b6976d8b35e2e9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ece3846117a3139dc5fd13d9676b440b3a3046dcd9f9c1e85c52e82bc151b26e\"" Sep 12 17:35:27.127638 containerd[1461]: time="2025-09-12T17:35:27.127609572Z" level=info msg="StartContainer for \"ece3846117a3139dc5fd13d9676b440b3a3046dcd9f9c1e85c52e82bc151b26e\"" Sep 12 17:35:27.162899 systemd[1]: Started cri-containerd-ece3846117a3139dc5fd13d9676b440b3a3046dcd9f9c1e85c52e82bc151b26e.scope - 
libcontainer container ece3846117a3139dc5fd13d9676b440b3a3046dcd9f9c1e85c52e82bc151b26e. Sep 12 17:35:27.206946 containerd[1461]: time="2025-09-12T17:35:27.206904876Z" level=info msg="StartContainer for \"ece3846117a3139dc5fd13d9676b440b3a3046dcd9f9c1e85c52e82bc151b26e\" returns successfully" Sep 12 17:35:27.284966 kubelet[2505]: E0912 17:35:27.284650 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:27.355163 kubelet[2505]: E0912 17:35:27.355129 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:27.372998 kubelet[2505]: I0912 17:35:27.372931 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fcb644cf7-qpqsk" podStartSLOduration=1.328107744 podStartE2EDuration="4.372906873s" podCreationTimestamp="2025-09-12 17:35:23 +0000 UTC" firstStartedPulling="2025-09-12 17:35:24.060477091 +0000 UTC m=+24.959856513" lastFinishedPulling="2025-09-12 17:35:27.10527622 +0000 UTC m=+28.004655642" observedRunningTime="2025-09-12 17:35:27.372901333 +0000 UTC m=+28.272280775" watchObservedRunningTime="2025-09-12 17:35:27.372906873 +0000 UTC m=+28.272286315" Sep 12 17:35:27.447735 kubelet[2505]: E0912 17:35:27.447695 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.447735 kubelet[2505]: W0912 17:35:27.447725 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.447749 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448006 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449486 kubelet[2505]: W0912 17:35:27.448013 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448022 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448195 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449486 kubelet[2505]: W0912 17:35:27.448203 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448212 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448379 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449486 kubelet[2505]: W0912 17:35:27.448386 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449486 kubelet[2505]: E0912 17:35:27.448395 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.448584 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449901 kubelet[2505]: W0912 17:35:27.448591 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.448600 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.448950 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449901 kubelet[2505]: W0912 17:35:27.448958 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.448967 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.449274 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.449901 kubelet[2505]: W0912 17:35:27.449282 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.449291 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.449901 kubelet[2505]: E0912 17:35:27.449501 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.450214 kubelet[2505]: W0912 17:35:27.449510 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.450214 kubelet[2505]: E0912 17:35:27.449520 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.450936 kubelet[2505]: E0912 17:35:27.450904 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.450936 kubelet[2505]: W0912 17:35:27.450921 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.450936 kubelet[2505]: E0912 17:35:27.450930 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.451701 kubelet[2505]: E0912 17:35:27.451255 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.451701 kubelet[2505]: W0912 17:35:27.451269 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.451701 kubelet[2505]: E0912 17:35:27.451280 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.451701 kubelet[2505]: E0912 17:35:27.451605 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.451701 kubelet[2505]: W0912 17:35:27.451614 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.451701 kubelet[2505]: E0912 17:35:27.451624 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.451924 kubelet[2505]: E0912 17:35:27.451905 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.451962 kubelet[2505]: W0912 17:35:27.451933 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.451962 kubelet[2505]: E0912 17:35:27.451945 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.454129 kubelet[2505]: E0912 17:35:27.454111 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.454129 kubelet[2505]: W0912 17:35:27.454124 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.454202 kubelet[2505]: E0912 17:35:27.454136 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.454351 kubelet[2505]: E0912 17:35:27.454338 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.454351 kubelet[2505]: W0912 17:35:27.454349 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.454411 kubelet[2505]: E0912 17:35:27.454358 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.454551 kubelet[2505]: E0912 17:35:27.454536 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.454551 kubelet[2505]: W0912 17:35:27.454548 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.454616 kubelet[2505]: E0912 17:35:27.454573 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.485273 kubelet[2505]: E0912 17:35:27.485229 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.485273 kubelet[2505]: W0912 17:35:27.485259 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.485273 kubelet[2505]: E0912 17:35:27.485285 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.485888 kubelet[2505]: E0912 17:35:27.485856 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.485888 kubelet[2505]: W0912 17:35:27.485873 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.485977 kubelet[2505]: E0912 17:35:27.485895 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.486251 kubelet[2505]: E0912 17:35:27.486229 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.486251 kubelet[2505]: W0912 17:35:27.486247 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.486339 kubelet[2505]: E0912 17:35:27.486269 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.486580 kubelet[2505]: E0912 17:35:27.486550 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.486614 kubelet[2505]: W0912 17:35:27.486579 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.486614 kubelet[2505]: E0912 17:35:27.486600 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.486896 kubelet[2505]: E0912 17:35:27.486879 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.486896 kubelet[2505]: W0912 17:35:27.486895 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.486989 kubelet[2505]: E0912 17:35:27.486958 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.487176 kubelet[2505]: E0912 17:35:27.487159 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.487176 kubelet[2505]: W0912 17:35:27.487172 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.487318 kubelet[2505]: E0912 17:35:27.487295 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.487427 kubelet[2505]: E0912 17:35:27.487412 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.487427 kubelet[2505]: W0912 17:35:27.487425 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.487623 kubelet[2505]: E0912 17:35:27.487529 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.487679 kubelet[2505]: E0912 17:35:27.487661 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.487679 kubelet[2505]: W0912 17:35:27.487677 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.487728 kubelet[2505]: E0912 17:35:27.487694 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.488010 kubelet[2505]: E0912 17:35:27.487991 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.488010 kubelet[2505]: W0912 17:35:27.488006 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.488074 kubelet[2505]: E0912 17:35:27.488029 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.488587 kubelet[2505]: E0912 17:35:27.488544 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.488587 kubelet[2505]: W0912 17:35:27.488582 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.488657 kubelet[2505]: E0912 17:35:27.488603 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.488903 kubelet[2505]: E0912 17:35:27.488876 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.488903 kubelet[2505]: W0912 17:35:27.488893 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.489079 kubelet[2505]: E0912 17:35:27.489009 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.489164 kubelet[2505]: E0912 17:35:27.489144 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.489164 kubelet[2505]: W0912 17:35:27.489158 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.489287 kubelet[2505]: E0912 17:35:27.489266 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.489412 kubelet[2505]: E0912 17:35:27.489392 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.489412 kubelet[2505]: W0912 17:35:27.489407 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.489486 kubelet[2505]: E0912 17:35:27.489423 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:27.489797 kubelet[2505]: E0912 17:35:27.489708 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.489797 kubelet[2505]: W0912 17:35:27.489725 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.489797 kubelet[2505]: E0912 17:35:27.489741 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.490425 kubelet[2505]: E0912 17:35:27.490403 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.490425 kubelet[2505]: W0912 17:35:27.490419 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.490624 kubelet[2505]: E0912 17:35:27.490534 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.490672 kubelet[2505]: E0912 17:35:27.490659 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.490672 kubelet[2505]: W0912 17:35:27.490670 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.490718 kubelet[2505]: E0912 17:35:27.490682 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.490975 kubelet[2505]: E0912 17:35:27.490957 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.490975 kubelet[2505]: W0912 17:35:27.490972 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.491039 kubelet[2505]: E0912 17:35:27.490984 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:27.491388 kubelet[2505]: E0912 17:35:27.491369 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:27.491423 kubelet[2505]: W0912 17:35:27.491403 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:27.491423 kubelet[2505]: E0912 17:35:27.491414 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.355305 kubelet[2505]: I0912 17:35:28.355264 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:28.355857 kubelet[2505]: E0912 17:35:28.355642 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:28.362972 kubelet[2505]: E0912 17:35:28.362948 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.362972 kubelet[2505]: W0912 17:35:28.362966 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.363080 kubelet[2505]: E0912 17:35:28.362982 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.363253 kubelet[2505]: E0912 17:35:28.363218 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.363253 kubelet[2505]: W0912 17:35:28.363239 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.363253 kubelet[2505]: E0912 17:35:28.363248 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.363479 kubelet[2505]: E0912 17:35:28.363459 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.363479 kubelet[2505]: W0912 17:35:28.363476 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.363565 kubelet[2505]: E0912 17:35:28.363489 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.363790 kubelet[2505]: E0912 17:35:28.363771 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.363790 kubelet[2505]: W0912 17:35:28.363784 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.363860 kubelet[2505]: E0912 17:35:28.363795 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.364002 kubelet[2505]: E0912 17:35:28.363985 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.364002 kubelet[2505]: W0912 17:35:28.363997 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.364002 kubelet[2505]: E0912 17:35:28.364005 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.364195 kubelet[2505]: E0912 17:35:28.364178 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.364195 kubelet[2505]: W0912 17:35:28.364189 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.364260 kubelet[2505]: E0912 17:35:28.364197 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.364397 kubelet[2505]: E0912 17:35:28.364372 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.364397 kubelet[2505]: W0912 17:35:28.364392 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.364397 kubelet[2505]: E0912 17:35:28.364399 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.364595 kubelet[2505]: E0912 17:35:28.364578 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.364595 kubelet[2505]: W0912 17:35:28.364589 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.364595 kubelet[2505]: E0912 17:35:28.364598 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.364800 kubelet[2505]: E0912 17:35:28.364783 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.364800 kubelet[2505]: W0912 17:35:28.364794 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.364800 kubelet[2505]: E0912 17:35:28.364802 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.365028 kubelet[2505]: E0912 17:35:28.365011 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.365028 kubelet[2505]: W0912 17:35:28.365021 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.365028 kubelet[2505]: E0912 17:35:28.365031 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.365223 kubelet[2505]: E0912 17:35:28.365201 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.365223 kubelet[2505]: W0912 17:35:28.365218 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.365223 kubelet[2505]: E0912 17:35:28.365226 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.365412 kubelet[2505]: E0912 17:35:28.365395 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.365412 kubelet[2505]: W0912 17:35:28.365405 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.365412 kubelet[2505]: E0912 17:35:28.365413 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.365619 kubelet[2505]: E0912 17:35:28.365602 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.365619 kubelet[2505]: W0912 17:35:28.365612 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.365619 kubelet[2505]: E0912 17:35:28.365620 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.365820 kubelet[2505]: E0912 17:35:28.365797 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.365820 kubelet[2505]: W0912 17:35:28.365808 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.365820 kubelet[2505]: E0912 17:35:28.365815 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.366075 kubelet[2505]: E0912 17:35:28.366062 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.366075 kubelet[2505]: W0912 17:35:28.366071 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.366122 kubelet[2505]: E0912 17:35:28.366079 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.392574 kubelet[2505]: E0912 17:35:28.392513 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.392574 kubelet[2505]: W0912 17:35:28.392557 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.392574 kubelet[2505]: E0912 17:35:28.392580 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.392907 kubelet[2505]: E0912 17:35:28.392886 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.392907 kubelet[2505]: W0912 17:35:28.392901 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.392961 kubelet[2505]: E0912 17:35:28.392918 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.393190 kubelet[2505]: E0912 17:35:28.393161 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.393190 kubelet[2505]: W0912 17:35:28.393174 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.393190 kubelet[2505]: E0912 17:35:28.393187 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.393460 kubelet[2505]: E0912 17:35:28.393427 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.393460 kubelet[2505]: W0912 17:35:28.393443 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.393460 kubelet[2505]: E0912 17:35:28.393457 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.393677 kubelet[2505]: E0912 17:35:28.393665 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.393677 kubelet[2505]: W0912 17:35:28.393675 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.393740 kubelet[2505]: E0912 17:35:28.393687 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.393901 kubelet[2505]: E0912 17:35:28.393880 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.393901 kubelet[2505]: W0912 17:35:28.393893 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.393984 kubelet[2505]: E0912 17:35:28.393908 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.394149 kubelet[2505]: E0912 17:35:28.394117 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.394149 kubelet[2505]: W0912 17:35:28.394134 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.394194 kubelet[2505]: E0912 17:35:28.394167 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.394362 kubelet[2505]: E0912 17:35:28.394345 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.394362 kubelet[2505]: W0912 17:35:28.394358 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.394453 kubelet[2505]: E0912 17:35:28.394431 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.394646 kubelet[2505]: E0912 17:35:28.394626 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.394646 kubelet[2505]: W0912 17:35:28.394638 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.394646 kubelet[2505]: E0912 17:35:28.394648 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.394971 kubelet[2505]: E0912 17:35:28.394934 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.395019 kubelet[2505]: W0912 17:35:28.394968 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.395019 kubelet[2505]: E0912 17:35:28.395006 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.395303 kubelet[2505]: E0912 17:35:28.395287 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.395303 kubelet[2505]: W0912 17:35:28.395298 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.395371 kubelet[2505]: E0912 17:35:28.395317 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.395663 kubelet[2505]: E0912 17:35:28.395644 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.395663 kubelet[2505]: W0912 17:35:28.395661 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.395715 kubelet[2505]: E0912 17:35:28.395680 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.395921 kubelet[2505]: E0912 17:35:28.395905 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.395921 kubelet[2505]: W0912 17:35:28.395917 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.395978 kubelet[2505]: E0912 17:35:28.395931 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.396222 kubelet[2505]: E0912 17:35:28.396199 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.396222 kubelet[2505]: W0912 17:35:28.396218 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.396286 kubelet[2505]: E0912 17:35:28.396249 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.396492 kubelet[2505]: E0912 17:35:28.396474 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.396522 kubelet[2505]: W0912 17:35:28.396490 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.396522 kubelet[2505]: E0912 17:35:28.396504 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.396809 kubelet[2505]: E0912 17:35:28.396791 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.396809 kubelet[2505]: W0912 17:35:28.396807 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.396873 kubelet[2505]: E0912 17:35:28.396819 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.397115 kubelet[2505]: E0912 17:35:28.397097 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.397115 kubelet[2505]: W0912 17:35:28.397112 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.397168 kubelet[2505]: E0912 17:35:28.397124 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:35:28.397874 kubelet[2505]: E0912 17:35:28.397845 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:35:28.397874 kubelet[2505]: W0912 17:35:28.397869 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:35:28.397932 kubelet[2505]: E0912 17:35:28.397879 2505 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:35:28.723713 containerd[1461]: time="2025-09-12T17:35:28.723428570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:28.725146 containerd[1461]: time="2025-09-12T17:35:28.724874572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:35:28.726950 containerd[1461]: time="2025-09-12T17:35:28.726885928Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:28.729599 containerd[1461]: time="2025-09-12T17:35:28.729549881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:28.730549 containerd[1461]: time="2025-09-12T17:35:28.730487205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.624965302s" Sep 12 17:35:28.730549 containerd[1461]: time="2025-09-12T17:35:28.730522912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:35:28.748659 containerd[1461]: time="2025-09-12T17:35:28.748578132Z" level=info msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:35:28.982364 containerd[1461]: time="2025-09-12T17:35:28.982239669Z" level=info msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1\"" Sep 12 17:35:28.986128 containerd[1461]: time="2025-09-12T17:35:28.986092260Z" level=info msg="StartContainer for \"c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1\"" Sep 12 17:35:29.021894 systemd[1]: Started cri-containerd-c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1.scope - libcontainer container c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1. Sep 12 17:35:29.057548 containerd[1461]: time="2025-09-12T17:35:29.057487352Z" level=info msg="StartContainer for \"c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1\" returns successfully" Sep 12 17:35:29.076203 systemd[1]: cri-containerd-c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1.scope: Deactivated successfully. 
Sep 12 17:35:29.472842 kubelet[2505]: E0912 17:35:29.472771 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:29.480001 containerd[1461]: time="2025-09-12T17:35:29.479850925Z" level=info msg="shim disconnected" id=c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1 namespace=k8s.io Sep 12 17:35:29.480001 containerd[1461]: time="2025-09-12T17:35:29.479972474Z" level=warning msg="cleaning up after shim disconnected" id=c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1 namespace=k8s.io Sep 12 17:35:29.480001 containerd[1461]: time="2025-09-12T17:35:29.479985178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:29.969667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6505f19e6e22e41ea38c9418e73d21c919642a4176f7cad2ce57872d17896f1-rootfs.mount: Deactivated successfully. Sep 12 17:35:30.492992 containerd[1461]: time="2025-09-12T17:35:30.492938172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:35:31.284299 kubelet[2505]: E0912 17:35:31.284206 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:33.284193 kubelet[2505]: E0912 17:35:33.284136 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:34.533704 containerd[1461]: time="2025-09-12T17:35:34.533630704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:34.534415 containerd[1461]: time="2025-09-12T17:35:34.534365635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:35:34.535553 containerd[1461]: time="2025-09-12T17:35:34.535509786Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:34.538028 containerd[1461]: time="2025-09-12T17:35:34.537998695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:34.538789 containerd[1461]: time="2025-09-12T17:35:34.538726763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.045733377s" Sep 12 17:35:34.538789 containerd[1461]: time="2025-09-12T17:35:34.538779712Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:35:34.540976 containerd[1461]: time="2025-09-12T17:35:34.540945885Z" level=info msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:35:34.556266 containerd[1461]: time="2025-09-12T17:35:34.556194328Z" level=info msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623\"" Sep 12 17:35:34.556857 containerd[1461]: time="2025-09-12T17:35:34.556827719Z" level=info msg="StartContainer for \"33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623\"" Sep 12 17:35:34.594937 systemd[1]: Started cri-containerd-33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623.scope - libcontainer container 33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623. Sep 12 17:35:34.645039 containerd[1461]: time="2025-09-12T17:35:34.644963714Z" level=info msg="StartContainer for \"33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623\" returns successfully" Sep 12 17:35:35.075865 kubelet[2505]: I0912 17:35:35.075814 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:35.076566 kubelet[2505]: E0912 17:35:35.076303 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:35.284045 kubelet[2505]: E0912 17:35:35.283942 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:35.505279 kubelet[2505]: E0912 17:35:35.505229 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:35.894394 containerd[1461]: time="2025-09-12T17:35:35.894333053Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:35:35.897982 systemd[1]: cri-containerd-33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623.scope: Deactivated successfully. Sep 12 17:35:35.921542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623-rootfs.mount: Deactivated successfully. 
Sep 12 17:35:35.956527 kubelet[2505]: I0912 17:35:35.956485 2505 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:35:36.238996 containerd[1461]: time="2025-09-12T17:35:36.238572395Z" level=info msg="shim disconnected" id=33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623 namespace=k8s.io Sep 12 17:35:36.238996 containerd[1461]: time="2025-09-12T17:35:36.238666732Z" level=warning msg="cleaning up after shim disconnected" id=33ffb6cca1a6660d86322b590299872f9d41164a1341507802c1a70f22532623 namespace=k8s.io Sep 12 17:35:36.238996 containerd[1461]: time="2025-09-12T17:35:36.238678665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:36.260821 systemd[1]: Created slice kubepods-burstable-pod36cb1f24_39ff_404a_a6eb_ecb4d0146f82.slice - libcontainer container kubepods-burstable-pod36cb1f24_39ff_404a_a6eb_ecb4d0146f82.slice. Sep 12 17:35:36.270732 systemd[1]: Created slice kubepods-besteffort-podf6d04b79_edb5_447a_a20c_db3b318d7074.slice - libcontainer container kubepods-besteffort-podf6d04b79_edb5_447a_a20c_db3b318d7074.slice. Sep 12 17:35:36.284823 systemd[1]: Created slice kubepods-besteffort-pod2ee3e697_bf88_4b68_b880_613542cf53e7.slice - libcontainer container kubepods-besteffort-pod2ee3e697_bf88_4b68_b880_613542cf53e7.slice. Sep 12 17:35:36.286808 containerd[1461]: time="2025-09-12T17:35:36.286688163Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:35:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:35:36.294046 systemd[1]: Created slice kubepods-burstable-pod18599629_0241_47c9_8e1f_57fa32503c68.slice - libcontainer container kubepods-burstable-pod18599629_0241_47c9_8e1f_57fa32503c68.slice. Sep 12 17:35:36.303279 systemd[1]: Created slice kubepods-besteffort-pode14f9bae_6755_476b_b594_70b724fc0885.slice - libcontainer container kubepods-besteffort-pode14f9bae_6755_476b_b594_70b724fc0885.slice. Sep 12 17:35:36.310862 systemd[1]: Created slice kubepods-besteffort-pod5d9c942d_8aa0_4a74_bc26_89c34879a081.slice - libcontainer container kubepods-besteffort-pod5d9c942d_8aa0_4a74_bc26_89c34879a081.slice. Sep 12 17:35:36.318635 systemd[1]: Created slice kubepods-besteffort-pod14e6e1cd_60e7_4891_bfed_d3421afafc80.slice - libcontainer container kubepods-besteffort-pod14e6e1cd_60e7_4891_bfed_d3421afafc80.slice. 
Sep 12 17:35:36.426723 kubelet[2505]: I0912 17:35:36.426628 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7vt\" (UniqueName: \"kubernetes.io/projected/36cb1f24-39ff-404a-a6eb-ecb4d0146f82-kube-api-access-6z7vt\") pod \"coredns-7c65d6cfc9-km7sb\" (UID: \"36cb1f24-39ff-404a-a6eb-ecb4d0146f82\") " pod="kube-system/coredns-7c65d6cfc9-km7sb" Sep 12 17:35:36.426723 kubelet[2505]: I0912 17:35:36.426702 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5d9c942d-8aa0-4a74-bc26-89c34879a081-goldmane-key-pair\") pod \"goldmane-7988f88666-rwvqt\" (UID: \"5d9c942d-8aa0-4a74-bc26-89c34879a081\") " pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.426723 kubelet[2505]: I0912 17:35:36.426733 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qdxp\" (UniqueName: \"kubernetes.io/projected/5d9c942d-8aa0-4a74-bc26-89c34879a081-kube-api-access-7qdxp\") pod \"goldmane-7988f88666-rwvqt\" (UID: \"5d9c942d-8aa0-4a74-bc26-89c34879a081\") " pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.427406 kubelet[2505]: I0912 17:35:36.426798 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6d04b79-edb5-447a-a20c-db3b318d7074-calico-apiserver-certs\") pod \"calico-apiserver-5455bb578c-4b9s4\" (UID: \"f6d04b79-edb5-447a-a20c-db3b318d7074\") " pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" Sep 12 17:35:36.427406 kubelet[2505]: I0912 17:35:36.426882 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d9c942d-8aa0-4a74-bc26-89c34879a081-goldmane-ca-bundle\") pod \"goldmane-7988f88666-rwvqt\" (UID: \"5d9c942d-8aa0-4a74-bc26-89c34879a081\") " pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.427406 kubelet[2505]: I0912 17:35:36.426931 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zp5\" (UniqueName: \"kubernetes.io/projected/2ee3e697-bf88-4b68-b880-613542cf53e7-kube-api-access-r9zp5\") pod \"calico-kube-controllers-b9b56c74c-m8j8k\" (UID: \"2ee3e697-bf88-4b68-b880-613542cf53e7\") " pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" Sep 12 17:35:36.427406 kubelet[2505]: I0912 17:35:36.426968 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18599629-0241-47c9-8e1f-57fa32503c68-config-volume\") pod \"coredns-7c65d6cfc9-t8b84\" (UID: \"18599629-0241-47c9-8e1f-57fa32503c68\") " pod="kube-system/coredns-7c65d6cfc9-t8b84" Sep 12 17:35:36.427406 kubelet[2505]: I0912 17:35:36.427110 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqnr\" (UniqueName: \"kubernetes.io/projected/e14f9bae-6755-476b-b594-70b724fc0885-kube-api-access-tfqnr\") pod \"calico-apiserver-5455bb578c-fhlbd\" (UID: \"e14f9bae-6755-476b-b594-70b724fc0885\") " pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" Sep 12 17:35:36.427578 kubelet[2505]: I0912 17:35:36.427172 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d9c942d-8aa0-4a74-bc26-89c34879a081-config\") pod \"goldmane-7988f88666-rwvqt\" (UID: \"5d9c942d-8aa0-4a74-bc26-89c34879a081\") " pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.427578 kubelet[2505]: I0912 17:35:36.427248 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-ca-bundle\") pod \"whisker-6487485898-njsl8\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " pod="calico-system/whisker-6487485898-njsl8" Sep 12 17:35:36.427578 kubelet[2505]: I0912 17:35:36.427310 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36cb1f24-39ff-404a-a6eb-ecb4d0146f82-config-volume\") pod \"coredns-7c65d6cfc9-km7sb\" (UID: \"36cb1f24-39ff-404a-a6eb-ecb4d0146f82\") " pod="kube-system/coredns-7c65d6cfc9-km7sb" Sep 12 17:35:36.427578 kubelet[2505]: I0912 17:35:36.427350 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wmg\" (UniqueName: \"kubernetes.io/projected/18599629-0241-47c9-8e1f-57fa32503c68-kube-api-access-b8wmg\") pod \"coredns-7c65d6cfc9-t8b84\" (UID: \"18599629-0241-47c9-8e1f-57fa32503c68\") " pod="kube-system/coredns-7c65d6cfc9-t8b84" Sep 12 17:35:36.427578 kubelet[2505]: I0912 17:35:36.427378 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e14f9bae-6755-476b-b594-70b724fc0885-calico-apiserver-certs\") pod \"calico-apiserver-5455bb578c-fhlbd\" (UID: \"e14f9bae-6755-476b-b594-70b724fc0885\") " pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" Sep 12 17:35:36.427874 kubelet[2505]: I0912 17:35:36.427412 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974kd\" (UniqueName: \"kubernetes.io/projected/14e6e1cd-60e7-4891-bfed-d3421afafc80-kube-api-access-974kd\") pod \"whisker-6487485898-njsl8\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " pod="calico-system/whisker-6487485898-njsl8" Sep 12 17:35:36.427874 kubelet[2505]: I0912 17:35:36.427434 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ee3e697-bf88-4b68-b880-613542cf53e7-tigera-ca-bundle\") pod \"calico-kube-controllers-b9b56c74c-m8j8k\" (UID: \"2ee3e697-bf88-4b68-b880-613542cf53e7\") " pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" Sep 12 17:35:36.427874 kubelet[2505]: I0912 17:35:36.427458 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8nkj\" (UniqueName: \"kubernetes.io/projected/f6d04b79-edb5-447a-a20c-db3b318d7074-kube-api-access-k8nkj\") pod \"calico-apiserver-5455bb578c-4b9s4\" (UID: \"f6d04b79-edb5-447a-a20c-db3b318d7074\") " pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" Sep 12 17:35:36.427874 kubelet[2505]: I0912 17:35:36.427483 2505 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-backend-key-pair\") pod \"whisker-6487485898-njsl8\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " 
pod="calico-system/whisker-6487485898-njsl8" Sep 12 17:35:36.510360 containerd[1461]: time="2025-09-12T17:35:36.510135675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:35:36.567702 kubelet[2505]: E0912 17:35:36.567663 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:36.568814 containerd[1461]: time="2025-09-12T17:35:36.568774107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-km7sb,Uid:36cb1f24-39ff-404a-a6eb-ecb4d0146f82,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:36.578001 containerd[1461]: time="2025-09-12T17:35:36.577952835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-4b9s4,Uid:f6d04b79-edb5-447a-a20c-db3b318d7074,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:35:36.593250 containerd[1461]: time="2025-09-12T17:35:36.593194126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9b56c74c-m8j8k,Uid:2ee3e697-bf88-4b68-b880-613542cf53e7,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:36.597719 kubelet[2505]: E0912 17:35:36.597398 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:36.599606 containerd[1461]: time="2025-09-12T17:35:36.599549750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8b84,Uid:18599629-0241-47c9-8e1f-57fa32503c68,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:36.624413 containerd[1461]: time="2025-09-12T17:35:36.624328765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-fhlbd,Uid:e14f9bae-6755-476b-b594-70b724fc0885,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:35:36.624667 containerd[1461]: time="2025-09-12T17:35:36.624497943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rwvqt,Uid:5d9c942d-8aa0-4a74-bc26-89c34879a081,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:36.625727 containerd[1461]: time="2025-09-12T17:35:36.625706143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6487485898-njsl8,Uid:14e6e1cd-60e7-4891-bfed-d3421afafc80,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:36.716085 containerd[1461]: time="2025-09-12T17:35:36.716018748Z" level=error msg="Failed to destroy network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.716625 containerd[1461]: time="2025-09-12T17:35:36.716584741Z" level=error msg="encountered an error cleaning up failed sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.716683 containerd[1461]: time="2025-09-12T17:35:36.716657488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-km7sb,Uid:36cb1f24-39ff-404a-a6eb-ecb4d0146f82,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.738847 kubelet[2505]: E0912 17:35:36.738780 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.739112 kubelet[2505]: E0912 17:35:36.738882 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-km7sb" Sep 12 17:35:36.739112 kubelet[2505]: E0912 17:35:36.738914 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-km7sb" Sep 12 17:35:36.739112 kubelet[2505]: E0912 17:35:36.738966 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-km7sb_kube-system(36cb1f24-39ff-404a-a6eb-ecb4d0146f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-km7sb_kube-system(36cb1f24-39ff-404a-a6eb-ecb4d0146f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-km7sb" podUID="36cb1f24-39ff-404a-a6eb-ecb4d0146f82" Sep 12 17:35:36.751498 containerd[1461]: time="2025-09-12T17:35:36.751434912Z" level=error msg="Failed to destroy network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.752810 containerd[1461]: time="2025-09-12T17:35:36.752776051Z" level=error msg="encountered an error cleaning up failed sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.752863 containerd[1461]: time="2025-09-12T17:35:36.752841114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-4b9s4,Uid:f6d04b79-edb5-447a-a20c-db3b318d7074,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.753148 kubelet[2505]: E0912 17:35:36.753104 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.753205 kubelet[2505]: E0912 17:35:36.753173 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" Sep 12 17:35:36.753205 kubelet[2505]: E0912 17:35:36.753197 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" Sep 12 17:35:36.753268 kubelet[2505]: E0912 17:35:36.753243 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5455bb578c-4b9s4_calico-apiserver(f6d04b79-edb5-447a-a20c-db3b318d7074)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5455bb578c-4b9s4_calico-apiserver(f6d04b79-edb5-447a-a20c-db3b318d7074)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" podUID="f6d04b79-edb5-447a-a20c-db3b318d7074" Sep 12 17:35:36.870514 containerd[1461]: time="2025-09-12T17:35:36.870454252Z" level=error msg="Failed to destroy network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.871356 containerd[1461]: time="2025-09-12T17:35:36.871124100Z" level=error msg="encountered an error cleaning up failed sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.871356 containerd[1461]: time="2025-09-12T17:35:36.871234568Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b9b56c74c-m8j8k,Uid:2ee3e697-bf88-4b68-b880-613542cf53e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.872979 kubelet[2505]: E0912 17:35:36.872942 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.873121 kubelet[2505]: E0912 17:35:36.873008 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" Sep 12 17:35:36.873121 kubelet[2505]: E0912 17:35:36.873029 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" Sep 12 17:35:36.873121 kubelet[2505]: E0912 17:35:36.873075 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b9b56c74c-m8j8k_calico-system(2ee3e697-bf88-4b68-b880-613542cf53e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b9b56c74c-m8j8k_calico-system(2ee3e697-bf88-4b68-b880-613542cf53e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" podUID="2ee3e697-bf88-4b68-b880-613542cf53e7" Sep 12 17:35:36.880446 containerd[1461]: time="2025-09-12T17:35:36.880353484Z" level=error msg="Failed to destroy network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.881541 containerd[1461]: time="2025-09-12T17:35:36.881445206Z" level=error msg="encountered an error cleaning up failed sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 12 17:35:36.881654 containerd[1461]: time="2025-09-12T17:35:36.881625355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8b84,Uid:18599629-0241-47c9-8e1f-57fa32503c68,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.884103 kubelet[2505]: E0912 17:35:36.882138 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.884103 kubelet[2505]: E0912 17:35:36.882212 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t8b84" Sep 12 17:35:36.884103 kubelet[2505]: E0912 17:35:36.882237 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t8b84" Sep 12 17:35:36.884326 kubelet[2505]: E0912 17:35:36.882289 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t8b84_kube-system(18599629-0241-47c9-8e1f-57fa32503c68)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t8b84_kube-system(18599629-0241-47c9-8e1f-57fa32503c68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t8b84" podUID="18599629-0241-47c9-8e1f-57fa32503c68" Sep 12 17:35:36.884627 containerd[1461]: time="2025-09-12T17:35:36.884583814Z" level=error msg="Failed to destroy network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.885419 containerd[1461]: time="2025-09-12T17:35:36.885393465Z" level=error msg="encountered an error cleaning up failed sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.885528 containerd[1461]: time="2025-09-12T17:35:36.885509363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6487485898-njsl8,Uid:14e6e1cd-60e7-4891-bfed-d3421afafc80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.886704 kubelet[2505]: E0912 17:35:36.886655 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.886811 kubelet[2505]: E0912 17:35:36.886729 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6487485898-njsl8" Sep 12 17:35:36.886858 kubelet[2505]: E0912 17:35:36.886839 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6487485898-njsl8" Sep 12 17:35:36.887948 kubelet[2505]: E0912 17:35:36.887607 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6487485898-njsl8_calico-system(14e6e1cd-60e7-4891-bfed-d3421afafc80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6487485898-njsl8_calico-system(14e6e1cd-60e7-4891-bfed-d3421afafc80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6487485898-njsl8" podUID="14e6e1cd-60e7-4891-bfed-d3421afafc80" Sep 12 17:35:36.893526 containerd[1461]: time="2025-09-12T17:35:36.893468259Z" level=error msg="Failed to destroy network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.893997 containerd[1461]: time="2025-09-12T17:35:36.893958400Z" level=error msg="encountered an error cleaning up failed sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.894064 containerd[1461]: time="2025-09-12T17:35:36.894035025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rwvqt,Uid:5d9c942d-8aa0-4a74-bc26-89c34879a081,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.894347 kubelet[2505]: E0912 17:35:36.894287 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.894512 kubelet[2505]: E0912 17:35:36.894355 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.894512 kubelet[2505]: E0912 17:35:36.894397 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-rwvqt" Sep 12 17:35:36.894512 kubelet[2505]: E0912 17:35:36.894446 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-rwvqt_calico-system(5d9c942d-8aa0-4a74-bc26-89c34879a081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-rwvqt_calico-system(5d9c942d-8aa0-4a74-bc26-89c34879a081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rwvqt" podUID="5d9c942d-8aa0-4a74-bc26-89c34879a081" Sep 12 17:35:36.899141 containerd[1461]: time="2025-09-12T17:35:36.899089653Z" level=error msg="Failed to destroy network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.899582 containerd[1461]: time="2025-09-12T17:35:36.899517307Z" level=error msg="encountered an error cleaning up failed sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.899582 containerd[1461]: time="2025-09-12T17:35:36.899566369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-fhlbd,Uid:e14f9bae-6755-476b-b594-70b724fc0885,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.899842 kubelet[2505]: E0912 17:35:36.899788 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:36.899951 kubelet[2505]: E0912 17:35:36.899862 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" Sep 12 17:35:36.899951 kubelet[2505]: E0912 17:35:36.899885 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" Sep 12 17:35:36.899951 kubelet[2505]: E0912 17:35:36.899933 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5455bb578c-fhlbd_calico-apiserver(e14f9bae-6755-476b-b594-70b724fc0885)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5455bb578c-fhlbd_calico-apiserver(e14f9bae-6755-476b-b594-70b724fc0885)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" podUID="e14f9bae-6755-476b-b594-70b724fc0885" Sep 12 17:35:37.290919 systemd[1]: Created slice kubepods-besteffort-pod33030e82_9043_4dea_9a42_6edffd5b404a.slice - libcontainer container kubepods-besteffort-pod33030e82_9043_4dea_9a42_6edffd5b404a.slice. 
Sep 12 17:35:37.294519 containerd[1461]: time="2025-09-12T17:35:37.294474129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kc8t,Uid:33030e82-9043-4dea-9a42-6edffd5b404a,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:37.360609 containerd[1461]: time="2025-09-12T17:35:37.360543268Z" level=error msg="Failed to destroy network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.361098 containerd[1461]: time="2025-09-12T17:35:37.361055330Z" level=error msg="encountered an error cleaning up failed sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.361200 containerd[1461]: time="2025-09-12T17:35:37.361125593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kc8t,Uid:33030e82-9043-4dea-9a42-6edffd5b404a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.361470 kubelet[2505]: E0912 17:35:37.361433 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.361797 kubelet[2505]: E0912 17:35:37.361596 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:37.361797 kubelet[2505]: E0912 17:35:37.361621 2505 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kc8t" Sep 12 17:35:37.361797 kubelet[2505]: E0912 17:35:37.361665 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5kc8t_calico-system(33030e82-9043-4dea-9a42-6edffd5b404a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5kc8t_calico-system(33030e82-9043-4dea-9a42-6edffd5b404a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:37.363312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c-shm.mount: Deactivated successfully. Sep 12 17:35:37.511466 kubelet[2505]: I0912 17:35:37.511427 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:37.512205 kubelet[2505]: I0912 17:35:37.512172 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:37.513543 kubelet[2505]: I0912 17:35:37.513521 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:37.515031 kubelet[2505]: I0912 17:35:37.514713 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:37.516543 kubelet[2505]: I0912 17:35:37.516522 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:37.517643 kubelet[2505]: I0912 17:35:37.517610 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:37.518906 kubelet[2505]: I0912 17:35:37.518613 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:35:37.519569 kubelet[2505]: I0912 17:35:37.519519 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:37.533750 containerd[1461]: time="2025-09-12T17:35:37.533691311Z" level=info msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\"" Sep 12 17:35:37.533750 containerd[1461]: time="2025-09-12T17:35:37.533747907Z" level=info msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" Sep 12 17:35:37.533940 containerd[1461]: time="2025-09-12T17:35:37.533881399Z" level=info msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" Sep 12 17:35:37.533940 containerd[1461]: time="2025-09-12T17:35:37.533933847Z" level=info msg="Ensure that sandbox 665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49 in task-service has been cleanup successfully" Sep 12 17:35:37.534029 containerd[1461]: time="2025-09-12T17:35:37.534008497Z" level=info msg="Ensure that sandbox ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de in task-service has been cleanup successfully" Sep 12 17:35:37.534521 containerd[1461]: time="2025-09-12T17:35:37.534473821Z" level=info msg="Ensure that sandbox c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c in task-service has been cleanup successfully" Sep 12 17:35:37.534872 containerd[1461]: 
time="2025-09-12T17:35:37.534813249Z" level=info msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\"" Sep 12 17:35:37.534976 containerd[1461]: time="2025-09-12T17:35:37.534953773Z" level=info msg="Ensure that sandbox b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60 in task-service has been cleanup successfully" Sep 12 17:35:37.540530 containerd[1461]: time="2025-09-12T17:35:37.540483403Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\"" Sep 12 17:35:37.541284 containerd[1461]: time="2025-09-12T17:35:37.540623907Z" level=info msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" Sep 12 17:35:37.541284 containerd[1461]: time="2025-09-12T17:35:37.540980527Z" level=info msg="Ensure that sandbox 2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c in task-service has been cleanup successfully" Sep 12 17:35:37.541499 containerd[1461]: time="2025-09-12T17:35:37.540659423Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:35:37.541731 containerd[1461]: time="2025-09-12T17:35:37.541712974Z" level=info msg="Ensure that sandbox 83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9 in task-service has been cleanup successfully" Sep 12 17:35:37.542159 containerd[1461]: time="2025-09-12T17:35:37.542044486Z" level=info msg="Ensure that sandbox 93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf in task-service has been cleanup successfully" Sep 12 17:35:37.543272 containerd[1461]: time="2025-09-12T17:35:37.540684341Z" level=info msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\"" Sep 12 17:35:37.545382 containerd[1461]: time="2025-09-12T17:35:37.545129543Z" level=info msg="Ensure that sandbox f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41 in task-service has been cleanup successfully" Sep 12 17:35:37.643650 containerd[1461]: time="2025-09-12T17:35:37.643574675Z" level=error msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" failed" error="failed to destroy network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.643865 containerd[1461]: time="2025-09-12T17:35:37.643834834Z" level=error msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" failed" error="failed to destroy network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.643978 containerd[1461]: time="2025-09-12T17:35:37.643941273Z" level=error msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" failed" error="failed to destroy network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.644630 kubelet[2505]: E0912 17:35:37.644344 2505 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:35:37.644630 kubelet[2505]: E0912 17:35:37.644412 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:37.644630 kubelet[2505]: E0912 17:35:37.644444 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"} Sep 12 17:35:37.644630 kubelet[2505]: E0912 17:35:37.644515 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e14f9bae-6755-476b-b594-70b724fc0885\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.644841 kubelet[2505]: E0912 17:35:37.644541 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e14f9bae-6755-476b-b594-70b724fc0885\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" podUID="e14f9bae-6755-476b-b594-70b724fc0885" Sep 12 17:35:37.644841 kubelet[2505]: E0912 17:35:37.644444 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9"} Sep 12 17:35:37.644841 kubelet[2505]: E0912 17:35:37.644579 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d9c942d-8aa0-4a74-bc26-89c34879a081\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.644841 kubelet[2505]: E0912 17:35:37.644593 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d9c942d-8aa0-4a74-bc26-89c34879a081\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rwvqt" podUID="5d9c942d-8aa0-4a74-bc26-89c34879a081" Sep 12 17:35:37.645208 kubelet[2505]: E0912 17:35:37.645155 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:37.645208 kubelet[2505]: E0912 17:35:37.645193 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"} Sep 12 17:35:37.645481 kubelet[2505]: E0912 17:35:37.645222 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ee3e697-bf88-4b68-b880-613542cf53e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.645481 kubelet[2505]: E0912 17:35:37.645246 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ee3e697-bf88-4b68-b880-613542cf53e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" podUID="2ee3e697-bf88-4b68-b880-613542cf53e7" Sep 12 17:35:37.645947 containerd[1461]: time="2025-09-12T17:35:37.645902017Z" level=error msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" failed" error="failed to destroy network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.646006 containerd[1461]: time="2025-09-12T17:35:37.645908259Z" level=error msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" failed" error="failed to destroy network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.646043 containerd[1461]: time="2025-09-12T17:35:37.645997647Z" level=error msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" failed" error="failed to destroy network for sandbox 
\"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.646101 containerd[1461]: time="2025-09-12T17:35:37.646046889Z" level=error msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" failed" error="failed to destroy network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.646143 containerd[1461]: time="2025-09-12T17:35:37.646086104Z" level=error msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" failed" error="failed to destroy network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:37.646305 kubelet[2505]: E0912 17:35:37.646241 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:37.646305 kubelet[2505]: E0912 17:35:37.646274 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:37.646305 kubelet[2505]: E0912 17:35:37.646298 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c"} Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646288 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646273 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646346 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"} Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646374 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de"} Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646253 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:37.646467 kubelet[2505]: E0912 17:35:37.646434 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c"} Sep 12 17:35:37.646670 kubelet[2505]: E0912 17:35:37.646383 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14e6e1cd-60e7-4891-bfed-d3421afafc80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.646670 kubelet[2505]: E0912 17:35:37.646457 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"33030e82-9043-4dea-9a42-6edffd5b404a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.646670 kubelet[2505]: E0912 17:35:37.646467 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14e6e1cd-60e7-4891-bfed-d3421afafc80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6487485898-njsl8" podUID="14e6e1cd-60e7-4891-bfed-d3421afafc80" Sep 12 17:35:37.646853 kubelet[2505]: E0912 17:35:37.646478 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"33030e82-9043-4dea-9a42-6edffd5b404a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kc8t" podUID="33030e82-9043-4dea-9a42-6edffd5b404a" Sep 12 17:35:37.646853 kubelet[2505]: E0912 17:35:37.646326 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6d04b79-edb5-447a-a20c-db3b318d7074\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.646853 kubelet[2505]: E0912 17:35:37.646413 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18599629-0241-47c9-8e1f-57fa32503c68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.646957 kubelet[2505]: E0912 17:35:37.646505 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6d04b79-edb5-447a-a20c-db3b318d7074\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" podUID="f6d04b79-edb5-447a-a20c-db3b318d7074" Sep 12 17:35:37.646957 kubelet[2505]: E0912 17:35:37.646278 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"} Sep 12 17:35:37.646957 kubelet[2505]: E0912 17:35:37.646529 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36cb1f24-39ff-404a-a6eb-ecb4d0146f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:37.646957 kubelet[2505]: E0912 17:35:37.646521 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18599629-0241-47c9-8e1f-57fa32503c68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t8b84" podUID="18599629-0241-47c9-8e1f-57fa32503c68" Sep 12 17:35:37.647087 kubelet[2505]: E0912 17:35:37.646549 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36cb1f24-39ff-404a-a6eb-ecb4d0146f82\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-km7sb" podUID="36cb1f24-39ff-404a-a6eb-ecb4d0146f82" Sep 12 17:35:42.393222 systemd[1]: Started sshd@7-10.0.0.87:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760). Sep 12 17:35:42.513423 sshd[3836]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:35:42.515562 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:42.532290 systemd-logind[1445]: New session 8 of user core. Sep 12 17:35:42.544280 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:35:43.035705 sshd[3836]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:43.041818 systemd[1]: sshd@7-10.0.0.87:22-10.0.0.1:52760.service: Deactivated successfully. Sep 12 17:35:43.041873 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:35:43.045133 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:35:43.047180 systemd-logind[1445]: Removed session 8. Sep 12 17:35:43.572371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912216847.mount: Deactivated successfully. Sep 12 17:35:46.975715 containerd[1461]: time="2025-09-12T17:35:46.975621339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:47.013894 containerd[1461]: time="2025-09-12T17:35:47.013810126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:35:47.046287 containerd[1461]: time="2025-09-12T17:35:47.046211325Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:47.066739 containerd[1461]: time="2025-09-12T17:35:47.066637056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:47.067136 containerd[1461]: time="2025-09-12T17:35:47.067103150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.556915779s" Sep 12 17:35:47.067188 containerd[1461]: time="2025-09-12T17:35:47.067142504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:35:47.076011 containerd[1461]: time="2025-09-12T17:35:47.075974963Z" level=info msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:35:47.395172 containerd[1461]: time="2025-09-12T17:35:47.372089779Z" level=info 
msg="CreateContainer within sandbox \"1c1fc796c9c9803e23aae339933505705b14039fb69eb2ecb4905db10690c7be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1bf1cf5cdb13b5c1d9dab6979f5ef218c965b88f3e2b5cdd09fdc8533de1181f\"" Sep 12 17:35:47.396013 containerd[1461]: time="2025-09-12T17:35:47.395955987Z" level=info msg="StartContainer for \"1bf1cf5cdb13b5c1d9dab6979f5ef218c965b88f3e2b5cdd09fdc8533de1181f\"" Sep 12 17:35:47.489977 systemd[1]: Started cri-containerd-1bf1cf5cdb13b5c1d9dab6979f5ef218c965b88f3e2b5cdd09fdc8533de1181f.scope - libcontainer container 1bf1cf5cdb13b5c1d9dab6979f5ef218c965b88f3e2b5cdd09fdc8533de1181f. Sep 12 17:35:48.047208 systemd[1]: Started sshd@8-10.0.0.87:22-10.0.0.1:52776.service - OpenSSH per-connection server daemon (10.0.0.1:52776). Sep 12 17:35:48.527587 containerd[1461]: time="2025-09-12T17:35:48.527528400Z" level=info msg="StartContainer for \"1bf1cf5cdb13b5c1d9dab6979f5ef218c965b88f3e2b5cdd09fdc8533de1181f\" returns successfully" Sep 12 17:35:48.640001 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 52776 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:35:48.640802 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:35:48.640880 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:35:48.642176 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:48.650257 systemd-logind[1445]: New session 9 of user core. Sep 12 17:35:48.658065 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:35:48.728267 kubelet[2505]: I0912 17:35:48.727446 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7g4sd" podStartSLOduration=2.77750522 podStartE2EDuration="25.727426305s" podCreationTimestamp="2025-09-12 17:35:23 +0000 UTC" firstStartedPulling="2025-09-12 17:35:24.118137669 +0000 UTC m=+25.017517091" lastFinishedPulling="2025-09-12 17:35:47.068058734 +0000 UTC m=+47.967438176" observedRunningTime="2025-09-12 17:35:48.727035751 +0000 UTC m=+49.626415183" watchObservedRunningTime="2025-09-12 17:35:48.727426305 +0000 UTC m=+49.626805747" Sep 12 17:35:49.142808 sshd[3896]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:49.146844 systemd[1]: sshd@8-10.0.0.87:22-10.0.0.1:52776.service: Deactivated successfully. Sep 12 17:35:49.148876 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:35:49.149553 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:35:49.150528 systemd-logind[1445]: Removed session 9. 
Sep 12 17:35:49.284779 containerd[1461]: time="2025-09-12T17:35:49.284625626Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:35:49.328593 containerd[1461]: time="2025-09-12T17:35:49.328541348Z" level=error msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" failed" error="failed to destroy network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:49.328741 kubelet[2505]: E0912 17:35:49.328697 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:35:49.328972 kubelet[2505]: E0912 17:35:49.328752 2505 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9"} Sep 12 17:35:49.328972 kubelet[2505]: E0912 17:35:49.328815 2505 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d9c942d-8aa0-4a74-bc26-89c34879a081\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:49.328972 kubelet[2505]: E0912 17:35:49.328843 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d9c942d-8aa0-4a74-bc26-89c34879a081\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-rwvqt" podUID="5d9c942d-8aa0-4a74-bc26-89c34879a081" Sep 12 17:35:49.533806 kubelet[2505]: I0912 17:35:49.533660 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:50.284161 containerd[1461]: time="2025-09-12T17:35:50.284120608Z" level=info msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" Sep 12 17:35:50.284645 containerd[1461]: time="2025-09-12T17:35:50.284120658Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\"" Sep 12 17:35:51.283895 containerd[1461]: time="2025-09-12T17:35:51.283836622Z" level=info msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" Sep 12 17:35:51.284172 containerd[1461]: time="2025-09-12T17:35:51.283837664Z" level=info msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" Sep 12 17:35:52.284651 containerd[1461]: 
time="2025-09-12T17:35:52.284592704Z" level=info msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\"" Sep 12 17:35:52.285214 containerd[1461]: time="2025-09-12T17:35:52.284648619Z" level=info msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\"" Sep 12 17:35:52.285332 containerd[1461]: time="2025-09-12T17:35:52.285302807Z" level=info msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\"" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.537 [INFO][4011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.543 [INFO][4011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" iface="eth0" netns="/var/run/netns/cni-610b77e2-a994-66c0-95f7-46c3910ee538" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" iface="eth0" netns="/var/run/netns/cni-610b77e2-a994-66c0-95f7-46c3910ee538" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" iface="eth0" netns="/var/run/netns/cni-610b77e2-a994-66c0-95f7-46c3910ee538" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:52.360 [INFO][4099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:52.360 [INFO][4099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:52.361 [INFO][4099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:52.914 [WARNING][4099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:52.914 [INFO][4099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:53.004 [INFO][4099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:53.015606 containerd[1461]: 2025-09-12 17:35:53.013 [INFO][4011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:35:53.018672 systemd[1]: run-netns-cni\x2d610b77e2\x2da994\x2d66c0\x2d95f7\x2d46c3910ee538.mount: Deactivated successfully. Sep 12 17:35:53.019009 containerd[1461]: time="2025-09-12T17:35:53.018863425Z" level=info msg="TearDown network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" successfully" Sep 12 17:35:53.019009 containerd[1461]: time="2025-09-12T17:35:53.018896306Z" level=info msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" returns successfully" Sep 12 17:35:53.019483 kubelet[2505]: E0912 17:35:53.019301 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:53.019806 containerd[1461]: time="2025-09-12T17:35:53.019627819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8b84,Uid:18599629-0241-47c9-8e1f-57fa32503c68,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.539 [INFO][4010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.541 [INFO][4010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns="/var/run/netns/cni-f1a935f1-db69-4bbc-a95b-45dfc684ce28" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.543 [INFO][4010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns="/var/run/netns/cni-f1a935f1-db69-4bbc-a95b-45dfc684ce28" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns="/var/run/netns/cni-f1a935f1-db69-4bbc-a95b-45dfc684ce28" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:51.544 [INFO][4010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:52.359 [INFO][4097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:52.360 [INFO][4097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:53.008 [INFO][4097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:53.238 [WARNING][4097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:53.913 [INFO][4097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:53.920685 containerd[1461]: 2025-09-12 17:35:53.916 [INFO][4010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:35:53.922563 containerd[1461]: time="2025-09-12T17:35:53.920967572Z" level=info msg="TearDown network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" successfully" Sep 12 17:35:53.922563 containerd[1461]: time="2025-09-12T17:35:53.921016664Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" returns successfully" Sep 12 17:35:53.922563 containerd[1461]: time="2025-09-12T17:35:53.921856630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6487485898-njsl8,Uid:14e6e1cd-60e7-4891-bfed-d3421afafc80,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:53.924553 systemd[1]: run-netns-cni\x2df1a935f1\x2ddb69\x2d4bbc\x2da95b\x2d45dfc684ce28.mount: Deactivated successfully. Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.519 [INFO][4080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.519 [INFO][4080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" iface="eth0" netns="/var/run/netns/cni-ff8ca3ff-abdc-803e-282e-e89fea104515" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" iface="eth0" netns="/var/run/netns/cni-ff8ca3ff-abdc-803e-282e-e89fea104515" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" iface="eth0" netns="/var/run/netns/cni-ff8ca3ff-abdc-803e-282e-e89fea104515" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.548 [INFO][4171] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:52.548 [INFO][4171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:53.913 [INFO][4171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:54.070 [WARNING][4171] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:54.070 [INFO][4171] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:54.072 [INFO][4171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:54.079884 containerd[1461]: 2025-09-12 17:35:54.075 [INFO][4080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:35:54.083106 containerd[1461]: time="2025-09-12T17:35:54.082909773Z" level=info msg="TearDown network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" successfully" Sep 12 17:35:54.083106 containerd[1461]: time="2025-09-12T17:35:54.082953185Z" level=info msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" returns successfully" Sep 12 17:35:54.084884 systemd[1]: run-netns-cni\x2dff8ca3ff\x2dabdc\x2d803e\x2d282e\x2de89fea104515.mount: Deactivated successfully. 
Sep 12 17:35:54.085790 containerd[1461]: time="2025-09-12T17:35:54.085742458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-4b9s4,Uid:f6d04b79-edb5-447a-a20c-db3b318d7074,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:54.155715 systemd[1]: Started sshd@9-10.0.0.87:22-10.0.0.1:60152.service - OpenSSH per-connection server daemon (10.0.0.1:60152). Sep 12 17:35:54.253819 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 60152 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:35:54.255715 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:54.260274 systemd-logind[1445]: New session 10 of user core. Sep 12 17:35:54.268477 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4079] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" iface="eth0" netns="/var/run/netns/cni-29751244-c01c-7bd7-f2e9-dadcd482b121" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4079] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" iface="eth0" netns="/var/run/netns/cni-29751244-c01c-7bd7-f2e9-dadcd482b121" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.520 [INFO][4079] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" iface="eth0" netns="/var/run/netns/cni-29751244-c01c-7bd7-f2e9-dadcd482b121" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.521 [INFO][4079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.521 [INFO][4079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.567 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:52.567 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:54.072 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:54.266 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:54.266 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:54.286 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:54.295919 containerd[1461]: 2025-09-12 17:35:54.291 [INFO][4079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:35:54.296864 containerd[1461]: time="2025-09-12T17:35:54.296823913Z" level=info msg="TearDown network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" successfully" Sep 12 17:35:54.296905 containerd[1461]: time="2025-09-12T17:35:54.296861934Z" level=info msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" returns successfully" Sep 12 17:35:54.299652 containerd[1461]: time="2025-09-12T17:35:54.299611833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kc8t,Uid:33030e82-9043-4dea-9a42-6edffd5b404a,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:54.300291 systemd[1]: run-netns-cni\x2d29751244\x2dc01c\x2d7bd7\x2df2e9\x2ddadcd482b121.mount: Deactivated successfully. Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.234 [INFO][4145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.236 [INFO][4145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" iface="eth0" netns="/var/run/netns/cni-dc36bdbf-d071-3850-34c7-1264c3f20c8a" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.237 [INFO][4145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" iface="eth0" netns="/var/run/netns/cni-dc36bdbf-d071-3850-34c7-1264c3f20c8a" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.237 [INFO][4145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" iface="eth0" netns="/var/run/netns/cni-dc36bdbf-d071-3850-34c7-1264c3f20c8a" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.237 [INFO][4145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.237 [INFO][4145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.265 [INFO][4191] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:53.265 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:54.286 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:54.341 [WARNING][4191] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:54.341 [INFO][4191] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:54.343 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:54.350040 containerd[1461]: 2025-09-12 17:35:54.347 [INFO][4145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Sep 12 17:35:54.350676 containerd[1461]: time="2025-09-12T17:35:54.350621844Z" level=info msg="TearDown network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" successfully" Sep 12 17:35:54.350676 containerd[1461]: time="2025-09-12T17:35:54.350653854Z" level=info msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" returns successfully" Sep 12 17:35:54.351440 containerd[1461]: time="2025-09-12T17:35:54.351411025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9b56c74c-m8j8k,Uid:2ee3e697-bf88-4b68-b880-613542cf53e7,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:54.353238 systemd[1]: run-netns-cni\x2ddc36bdbf\x2dd071\x2d3850\x2d34c7\x2d1264c3f20c8a.mount: Deactivated successfully. Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.233 [INFO][4144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.235 [INFO][4144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" iface="eth0" netns="/var/run/netns/cni-09dd0541-1ca5-b2ff-9e9b-b13db8791f35" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.235 [INFO][4144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" iface="eth0" netns="/var/run/netns/cni-09dd0541-1ca5-b2ff-9e9b-b13db8791f35" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.235 [INFO][4144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" iface="eth0" netns="/var/run/netns/cni-09dd0541-1ca5-b2ff-9e9b-b13db8791f35" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.235 [INFO][4144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.235 [INFO][4144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.265 [INFO][4189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:53.265 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:54.343 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:54.486 [WARNING][4189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:54.486 [INFO][4189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:54.524 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:54.531363 containerd[1461]: 2025-09-12 17:35:54.528 [INFO][4144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Sep 12 17:35:54.534447 containerd[1461]: time="2025-09-12T17:35:54.534405410Z" level=info msg="TearDown network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" successfully" Sep 12 17:35:54.534693 containerd[1461]: time="2025-09-12T17:35:54.534530524Z" level=info msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" returns successfully" Sep 12 17:35:54.536702 systemd[1]: run-netns-cni\x2d09dd0541\x2d1ca5\x2db2ff\x2d9e9b\x2db13db8791f35.mount: Deactivated successfully. 
Sep 12 17:35:54.537974 containerd[1461]: time="2025-09-12T17:35:54.537018232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-fhlbd,Uid:e14f9bae-6755-476b-b594-70b724fc0885,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.233 [INFO][4154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" iface="eth0" netns="/var/run/netns/cni-a16642f6-bfb3-fecd-e086-12b1ec560e92" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" iface="eth0" netns="/var/run/netns/cni-a16642f6-bfb3-fecd-e086-12b1ec560e92" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4154] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" iface="eth0" netns="/var/run/netns/cni-a16642f6-bfb3-fecd-e086-12b1ec560e92" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.238 [INFO][4154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.268 [INFO][4193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:53.268 [INFO][4193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:54.524 [INFO][4193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:54.531 [WARNING][4193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:54.531 [INFO][4193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:54.533 [INFO][4193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:54.544067 containerd[1461]: 2025-09-12 17:35:54.540 [INFO][4154] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:35:54.544561 containerd[1461]: time="2025-09-12T17:35:54.544263256Z" level=info msg="TearDown network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" successfully" Sep 12 17:35:54.544561 containerd[1461]: time="2025-09-12T17:35:54.544294996Z" level=info msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" returns successfully" Sep 12 17:35:54.544721 kubelet[2505]: E0912 17:35:54.544684 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:54.546971 containerd[1461]: time="2025-09-12T17:35:54.546713603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-km7sb,Uid:36cb1f24-39ff-404a-a6eb-ecb4d0146f82,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:54.547814 systemd[1]: run-netns-cni\x2da16642f6\x2dbfb3\x2dfecd\x2de086\x2d12b1ec560e92.mount: Deactivated successfully. Sep 12 17:35:54.577282 sshd[4229]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:54.582066 systemd[1]: sshd@9-10.0.0.87:22-10.0.0.1:60152.service: Deactivated successfully. Sep 12 17:35:54.584308 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:35:54.585075 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:35:54.585994 systemd-logind[1445]: Removed session 10. Sep 12 17:35:55.419849 systemd-networkd[1402]: cali34d88c760b5: Link UP Sep 12 17:35:55.420632 systemd-networkd[1402]: cali34d88c760b5: Gained carrier Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:53.948 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.224 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0 coredns-7c65d6cfc9- kube-system 18599629-0241-47c9-8e1f-57fa32503c68 957 0 2025-09-12 17:35:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-t8b84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali34d88c760b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.224 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.315 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" HandleID="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.315 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" HandleID="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000518960), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-t8b84", "timestamp":"2025-09-12 17:35:54.315537856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.315 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.533 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.533 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.607 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:54.871 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.040 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.043 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.044 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.044 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.046 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.114 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.376 [INFO][4233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.376 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" host="localhost" Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.376 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:55.771230 containerd[1461]: 2025-09-12 17:35:55.376 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" HandleID="k8s-pod-network.d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.772083 containerd[1461]: 2025-09-12 17:35:55.380 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18599629-0241-47c9-8e1f-57fa32503c68", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-t8b84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34d88c760b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:55.772083 containerd[1461]: 2025-09-12 17:35:55.380 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.772083 containerd[1461]: 2025-09-12 17:35:55.380 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34d88c760b5 ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.772083 containerd[1461]: 2025-09-12 17:35:55.419 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:55.772083 
containerd[1461]: 2025-09-12 17:35:55.419 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18599629-0241-47c9-8e1f-57fa32503c68", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a", Pod:"coredns-7c65d6cfc9-t8b84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34d88c760b5", MAC:"2e:f7:6d:c1:51:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:55.772083 containerd[1461]: 2025-09-12 17:35:55.754 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8b84" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:35:56.327836 kernel: bpftool[4399]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:35:56.540883 systemd-networkd[1402]: cali34d88c760b5: Gained IPv6LL Sep 12 17:35:56.621719 systemd-networkd[1402]: vxlan.calico: Link UP Sep 12 17:35:56.621730 systemd-networkd[1402]: vxlan.calico: Gained carrier Sep 12 17:35:57.168688 containerd[1461]: time="2025-09-12T17:35:57.168566818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:57.169396 containerd[1461]: time="2025-09-12T17:35:57.168673217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:57.169396 containerd[1461]: time="2025-09-12T17:35:57.168692153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:57.169396 containerd[1461]: time="2025-09-12T17:35:57.168831754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:57.196941 systemd[1]: Started cri-containerd-d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a.scope - libcontainer container d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a. Sep 12 17:35:57.210150 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:57.235080 containerd[1461]: time="2025-09-12T17:35:57.235043513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8b84,Uid:18599629-0241-47c9-8e1f-57fa32503c68,Namespace:kube-system,Attempt:1,} returns sandbox id \"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a\"" Sep 12 17:35:57.235699 kubelet[2505]: E0912 17:35:57.235661 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:35:57.240938 containerd[1461]: time="2025-09-12T17:35:57.240891124Z" level=info msg="CreateContainer within sandbox \"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:35:58.446752 systemd-networkd[1402]: cali4ced7edac32: Link UP Sep 12 17:35:58.448715 systemd-networkd[1402]: cali4ced7edac32: Gained carrier Sep 12 17:35:58.519963 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.095 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6487485898--njsl8-eth0 whisker-6487485898- calico-system 14e6e1cd-60e7-4891-bfed-d3421afafc80 961 0 2025-09-12 17:35:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6487485898 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6487485898-njsl8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4ced7edac32 [] [] }} ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.095 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.152 [INFO][4531] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.152 [INFO][4531] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" 
Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6487485898-njsl8", "timestamp":"2025-09-12 17:35:58.152722313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.153 [INFO][4531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.153 [INFO][4531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.153 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.160 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.165 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.169 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.173 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.176 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.176 [INFO][4531] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.178 [INFO][4531] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.208 [INFO][4531] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.434 [INFO][4531] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.434 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" host="localhost" Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.434 [INFO][4531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:58.756359 containerd[1461]: 2025-09-12 17:35:58.434 [INFO][4531] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.440 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6487485898--njsl8-eth0", GenerateName:"whisker-6487485898-", Namespace:"calico-system", SelfLink:"", UID:"14e6e1cd-60e7-4891-bfed-d3421afafc80", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6487485898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6487485898-njsl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ced7edac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.440 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.440 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ced7edac32 ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.448 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.452 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6487485898--njsl8-eth0", GenerateName:"whisker-6487485898-", Namespace:"calico-system", SelfLink:"", UID:"14e6e1cd-60e7-4891-bfed-d3421afafc80", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6487485898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f", Pod:"whisker-6487485898-njsl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ced7edac32", MAC:"ee:9b:66:43:9f:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:58.757973 containerd[1461]: 2025-09-12 17:35:58.752 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Namespace="calico-system" Pod="whisker-6487485898-njsl8" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:35:59.265773 containerd[1461]: time="2025-09-12T17:35:59.265456700Z" level=info msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" Sep 12 17:35:59.355877 containerd[1461]: time="2025-09-12T17:35:59.355746429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:59.355877 containerd[1461]: time="2025-09-12T17:35:59.355823293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:59.355877 containerd[1461]: time="2025-09-12T17:35:59.355838211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:59.356435 containerd[1461]: time="2025-09-12T17:35:59.356253379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:59.384933 systemd[1]: Started cri-containerd-8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f.scope - libcontainer container 8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f. Sep 12 17:35:59.601450 systemd[1]: Started sshd@10-10.0.0.87:22-10.0.0.1:60168.service - OpenSSH per-connection server daemon (10.0.0.1:60168). 
Sep 12 17:35:59.609032 systemd-networkd[1402]: cali68726cad122: Link UP Sep 12 17:35:59.610985 systemd-networkd[1402]: cali68726cad122: Gained carrier Sep 12 17:35:59.693012 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:35:59.727424 containerd[1461]: time="2025-09-12T17:35:59.727383381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6487485898-njsl8,Uid:14e6e1cd-60e7-4891-bfed-d3421afafc80,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\"" Sep 12 17:35:59.729094 containerd[1461]: time="2025-09-12T17:35:59.728994603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:35:59.749558 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 60168 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:35:59.751586 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:59.755529 systemd-logind[1445]: New session 11 of user core. Sep 12 17:35:59.765052 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.437 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0 calico-apiserver-5455bb578c- calico-apiserver f6d04b79-edb5-447a-a20c-db3b318d7074 965 0 2025-09-12 17:35:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5455bb578c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5455bb578c-4b9s4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali68726cad122 [] [] }} ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.437 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.948 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" HandleID="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.948 [INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" HandleID="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5455bb578c-4b9s4", "timestamp":"2025-09-12 17:35:58.94810457 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.948 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.948 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:58.948 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.004 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.009 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.013 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.015 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.017 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.017 [INFO][4585] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.019 [INFO][4585] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6 Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.179 [INFO][4585] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.593 [INFO][4585] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.593 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" host="localhost" Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.593 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:35:59.776665 containerd[1461]: 2025-09-12 17:35:59.593 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" HandleID="k8s-pod-network.7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.601 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d04b79-edb5-447a-a20c-db3b318d7074", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5455bb578c-4b9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68726cad122", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.601 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.601 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68726cad122 ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.612 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.613 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d04b79-edb5-447a-a20c-db3b318d7074", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6", Pod:"calico-apiserver-5455bb578c-4b9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68726cad122", MAC:"4e:d1:8e:a8:50:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:59.777651 containerd[1461]: 2025-09-12 17:35:59.771 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-4b9s4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:35:59.801863 systemd-networkd[1402]: cali4ced7edac32: Gained IPv6LL Sep 12 17:36:00.182613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468789046.mount: Deactivated successfully. Sep 12 17:36:00.361735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074938817.mount: Deactivated successfully. Sep 12 17:36:00.404538 sshd[4681]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:00.409427 systemd[1]: sshd@10-10.0.0.87:22-10.0.0.1:60168.service: Deactivated successfully. Sep 12 17:36:00.411359 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:36:00.412073 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:36:00.412884 systemd-logind[1445]: Removed session 11. Sep 12 17:36:00.437110 containerd[1461]: time="2025-09-12T17:36:00.436924480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:00.437110 containerd[1461]: time="2025-09-12T17:36:00.437010682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:00.437110 containerd[1461]: time="2025-09-12T17:36:00.437026443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:00.438190 containerd[1461]: time="2025-09-12T17:36:00.438142302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:00.466493 systemd[1]: Started cri-containerd-7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6.scope - libcontainer container 7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6. Sep 12 17:36:00.477331 systemd-networkd[1402]: calia3612bb6eb4: Link UP Sep 12 17:36:00.478242 systemd-networkd[1402]: calia3612bb6eb4: Gained carrier Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.604 [WARNING][4637] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18599629-0241-47c9-8e1f-57fa32503c68", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a", Pod:"coredns-7c65d6cfc9-t8b84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34d88c760b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.606 [INFO][4637] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.606 [INFO][4637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" iface="eth0" netns="" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.607 [INFO][4637] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.607 [INFO][4637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.653 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:35:59.653 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:36:00.464 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:36:00.474 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:36:00.474 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:36:00.476 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:00.484340 containerd[1461]: 2025-09-12 17:36:00.480 [INFO][4637] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:00.485145 containerd[1461]: time="2025-09-12T17:36:00.484958434Z" level=info msg="TearDown network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" successfully" Sep 12 17:36:00.485145 containerd[1461]: time="2025-09-12T17:36:00.484996225Z" level=info msg="StopPodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" returns successfully" Sep 12 17:36:00.491347 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:00.496190 containerd[1461]: time="2025-09-12T17:36:00.496129739Z" level=info msg="RemovePodSandbox for \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" Sep 12 17:36:00.500174 containerd[1461]: time="2025-09-12T17:36:00.500132163Z" level=info msg="Forcibly stopping sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\"" Sep 12 17:36:00.522915 containerd[1461]: time="2025-09-12T17:36:00.522797661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-4b9s4,Uid:f6d04b79-edb5-447a-a20c-db3b318d7074,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6\"" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:58.437 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5kc8t-eth0 csi-node-driver- calico-system 33030e82-9043-4dea-9a42-6edffd5b404a 964 0 2025-09-12 17:35:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5kc8t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia3612bb6eb4 [] [] }} ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:58.437 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:58.975 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" HandleID="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:58.975 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" HandleID="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5kc8t", 
"timestamp":"2025-09-12 17:35:58.975784628 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:58.976 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.594 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.594 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.773 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.850 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.855 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.857 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.859 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.859 [INFO][4587] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:35:59.862 [INFO][4587] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:36:00.396 [INFO][4587] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:36:00.463 [INFO][4587] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:36:00.463 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" host="localhost" Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:36:00.463 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:00.802011 containerd[1461]: 2025-09-12 17:36:00.463 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" HandleID="k8s-pod-network.8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.468 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kc8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33030e82-9043-4dea-9a42-6edffd5b404a", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5kc8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3612bb6eb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.468 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.468 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3612bb6eb4 ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.480 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.483 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kc8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33030e82-9043-4dea-9a42-6edffd5b404a", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a", Pod:"csi-node-driver-5kc8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3612bb6eb4", MAC:"4e:ec:2e:a4:83:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:00.803554 containerd[1461]: 2025-09-12 17:36:00.798 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a" Namespace="calico-system" Pod="csi-node-driver-5kc8t" WorkloadEndpoint="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:36:01.133730 containerd[1461]: time="2025-09-12T17:36:01.133645294Z" level=info msg="CreateContainer within sandbox \"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ea15fb6094c3bd419fcc85930d3d315a8d567619d305caa185d3a61b5306b4d\"" Sep 12 17:36:01.135345 containerd[1461]: time="2025-09-12T17:36:01.134253378Z" level=info msg="StartContainer for \"9ea15fb6094c3bd419fcc85930d3d315a8d567619d305caa185d3a61b5306b4d\"" Sep 12 17:36:01.168992 systemd[1]: Started cri-containerd-9ea15fb6094c3bd419fcc85930d3d315a8d567619d305caa185d3a61b5306b4d.scope - libcontainer container 9ea15fb6094c3bd419fcc85930d3d315a8d567619d305caa185d3a61b5306b4d. Sep 12 17:36:01.272950 systemd-networkd[1402]: cali68726cad122: Gained IPv6LL Sep 12 17:36:01.382413 containerd[1461]: time="2025-09-12T17:36:01.382227206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:01.382413 containerd[1461]: time="2025-09-12T17:36:01.382275810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:01.382413 containerd[1461]: time="2025-09-12T17:36:01.382285418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:01.382413 containerd[1461]: time="2025-09-12T17:36:01.382375192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:01.404937 systemd[1]: Started cri-containerd-8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a.scope - libcontainer container 8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a. Sep 12 17:36:01.412518 containerd[1461]: time="2025-09-12T17:36:01.412471532Z" level=info msg="StartContainer for \"9ea15fb6094c3bd419fcc85930d3d315a8d567619d305caa185d3a61b5306b4d\" returns successfully" Sep 12 17:36:01.418652 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:01.432075 containerd[1461]: time="2025-09-12T17:36:01.432019606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kc8t,Uid:33030e82-9043-4dea-9a42-6edffd5b404a,Namespace:calico-system,Attempt:1,} returns sandbox id \"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a\"" Sep 12 17:36:01.557598 systemd-networkd[1402]: cali86693cd83ab: Link UP Sep 12 17:36:01.557828 systemd-networkd[1402]: cali86693cd83ab: Gained carrier Sep 12 17:36:01.567313 kubelet[2505]: E0912 17:36:01.567250 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:35:59.599 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0 calico-kube-controllers-b9b56c74c- calico-system 2ee3e697-bf88-4b68-b880-613542cf53e7 978 0 2025-09-12 17:35:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b9b56c74c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b9b56c74c-m8j8k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali86693cd83ab [] [] }} ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:35:59.599 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:35:59.801 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" HandleID="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:35:59.801 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" HandleID="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00011e5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b9b56c74c-m8j8k", "timestamp":"2025-09-12 17:35:59.801546076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:35:59.801 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:00.476 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:00.476 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:00.484 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:00.490 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:00.799 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.163 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.344 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.344 [INFO][4702] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.345 [INFO][4702] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.420 [INFO][4702] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4702] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" host="localhost" Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:01.776693 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" HandleID="k8s-pod-network.876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.553 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0", GenerateName:"calico-kube-controllers-b9b56c74c-", Namespace:"calico-system", SelfLink:"", UID:"2ee3e697-bf88-4b68-b880-613542cf53e7", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9b56c74c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b9b56c74c-m8j8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86693cd83ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.553 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.553 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86693cd83ab ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.556 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.556 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface
name, and active container ID to endpoint ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0", GenerateName:"calico-kube-controllers-b9b56c74c-", Namespace:"calico-system", SelfLink:"", UID:"2ee3e697-bf88-4b68-b880-613542cf53e7", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9b56c74c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f", Pod:"calico-kube-controllers-b9b56c74c-m8j8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86693cd83ab", MAC:"52:1d:2e:fc:4e:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:01.777710 containerd[1461]: 2025-09-12 17:36:01.773 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f" Namespace="calico-system" Pod="calico-kube-controllers-b9b56c74c-m8j8k" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0" Sep 12 17:36:01.816568 containerd[1461]: time="2025-09-12T17:36:01.816465533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:01.816568 containerd[1461]: time="2025-09-12T17:36:01.816525529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:01.816568 containerd[1461]: time="2025-09-12T17:36:01.816537633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:01.817080 containerd[1461]: time="2025-09-12T17:36:01.816626685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:01.827837 kubelet[2505]: I0912 17:36:01.827745 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t8b84" podStartSLOduration=57.827719663 podStartE2EDuration="57.827719663s" podCreationTimestamp="2025-09-12 17:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:01.825958091 +0000 UTC m=+62.725337523" watchObservedRunningTime="2025-09-12 17:36:01.827719663 +0000 UTC m=+62.727099095" Sep 12 17:36:01.850947 systemd[1]: Started cri-containerd-876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f.scope - libcontainer container 876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f. Sep 12 17:36:01.867155 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:01.895818 containerd[1461]: time="2025-09-12T17:36:01.895774163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b9b56c74c-m8j8k,Uid:2ee3e697-bf88-4b68-b880-613542cf53e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f\"" Sep 12 17:36:01.959094 systemd-networkd[1402]: cali478ce5a866d: Link UP Sep 12 17:36:01.960169 systemd-networkd[1402]: cali478ce5a866d: Gained carrier Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.341 [WARNING][4784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"18599629-0241-47c9-8e1f-57fa32503c68", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d02268b9a0871769856ff8dc5e3774e35ef85ab4fb7a743c1d98b7857cd5a99a", Pod:"coredns-7c65d6cfc9-t8b84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali34d88c760b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil),
QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.345 [INFO][4784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.345 [INFO][4784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" iface="eth0" netns="" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.345 [INFO][4784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.346 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.372 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.373 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.953 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.960 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.960 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" HandleID="k8s-pod-network.ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Workload="localhost-k8s-coredns--7c65d6cfc9--t8b84-eth0" Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.962 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:01.971592 containerd[1461]: 2025-09-12 17:36:01.966 [INFO][4784] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de" Sep 12 17:36:01.972291 containerd[1461]: time="2025-09-12T17:36:01.971640084Z" level=info msg="TearDown network for sandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" successfully" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:35:59.611 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0 calico-apiserver-5455bb578c- calico-apiserver e14f9bae-6755-476b-b594-70b724fc0885 977 0 2025-09-12 17:35:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5455bb578c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5455bb578c-fhlbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali478ce5a866d [] [] }} ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:35:59.611 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:35:59.875 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" HandleID="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:35:59.876 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" HandleID="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005848d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5455bb578c-fhlbd", "timestamp":"2025-09-12 17:35:59.875910871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:35:59.876 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.550 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.622 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.784 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.868 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.905 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.907 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.908 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.909 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3 Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.926 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.952 [INFO][4718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.952 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" host="localhost" Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.952 [INFO][4718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:02.187117 containerd[1461]: 2025-09-12 17:36:01.952 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" HandleID="k8s-pod-network.dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:01.956 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14f9bae-6755-476b-b594-70b724fc0885", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5455bb578c-fhlbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali478ce5a866d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:01.956 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:01.956 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali478ce5a866d ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:01.960 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:01.961 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14f9bae-6755-476b-b594-70b724fc0885", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3", Pod:"calico-apiserver-5455bb578c-fhlbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali478ce5a866d", MAC:"c2:35:94:b0:e1:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.187800 containerd[1461]: 2025-09-12 17:36:02.184 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3" Namespace="calico-apiserver" Pod="calico-apiserver-5455bb578c-fhlbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0" Sep 12 17:36:02.337029 systemd-networkd[1402]: califb80a0cac0b: Link UP Sep 12 17:36:02.337331 systemd-networkd[1402]: califb80a0cac0b: Gained carrier Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.341 [INFO][4806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0 coredns-7c65d6cfc9- kube-system 36cb1f24-39ff-404a-a6eb-ecb4d0146f82 979 0 2025-09-12 17:35:04 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-km7sb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb80a0cac0b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.342 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12
17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.419 [INFO][4885] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" HandleID="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.419 [INFO][4885] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" HandleID="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503280), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-km7sb", "timestamp":"2025-09-12 17:36:01.419583103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.420 [INFO][4885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.962 [INFO][4885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.962 [INFO][4885] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:01.970 [INFO][4885] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.186 [INFO][4885] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.191 [INFO][4885] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.193 [INFO][4885] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.195 [INFO][4885] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.195 [INFO][4885] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.196 [INFO][4885] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6 Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.223 [INFO][4885] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.329 [INFO][4885] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 
2025-09-12 17:36:02.329 [INFO][4885] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" host="localhost" Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.329 [INFO][4885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:02.396432 containerd[1461]: 2025-09-12 17:36:02.329 [INFO][4885] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" HandleID="k8s-pod-network.44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.333 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36cb1f24-39ff-404a-a6eb-ecb4d0146f82", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-km7sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb80a0cac0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.333 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.333 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb80a0cac0b ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12
17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.337 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.338 [INFO][4806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36cb1f24-39ff-404a-a6eb-ecb4d0146f82", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6", Pod:"coredns-7c65d6cfc9-km7sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb80a0cac0b", MAC:"5a:4c:49:5f:74:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:02.397579 containerd[1461]: 2025-09-12 17:36:02.390 [INFO][4806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-km7sb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:36:02.410374 containerd[1461]: time="2025-09-12T17:36:02.410267240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:02.410374 containerd[1461]: time="2025-09-12T17:36:02.410331484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:02.410374 containerd[1461]: time="2025-09-12T17:36:02.410346553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:02.410610 containerd[1461]: time="2025-09-12T17:36:02.410426989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:02.429615 containerd[1461]: time="2025-09-12T17:36:02.429566604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:36:02.429745 containerd[1461]: time="2025-09-12T17:36:02.429656788Z" level=info msg="RemovePodSandbox \"ffd04e399b57ab45ea97ac4434af0e2e6e8b68bf082ed314e3041e4b779845de\" returns successfully" Sep 12 17:36:02.436031 systemd[1]: Started cri-containerd-dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3.scope - libcontainer container dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3. Sep 12 17:36:02.456260 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:02.503502 containerd[1461]: time="2025-09-12T17:36:02.503445765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5455bb578c-fhlbd,Uid:e14f9bae-6755-476b-b594-70b724fc0885,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3\"" Sep 12 17:36:02.552014 systemd-networkd[1402]: calia3612bb6eb4: Gained IPv6LL Sep 12 17:36:02.567910 containerd[1461]: time="2025-09-12T17:36:02.567501665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:02.567910 containerd[1461]: time="2025-09-12T17:36:02.567577521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:02.567910 containerd[1461]: time="2025-09-12T17:36:02.567590707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:02.567910 containerd[1461]: time="2025-09-12T17:36:02.567671102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:02.576659 kubelet[2505]: E0912 17:36:02.576437 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:02.609042 systemd[1]: Started cri-containerd-44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6.scope - libcontainer container 44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6. 
Sep 12 17:36:02.624707 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:02.656733 containerd[1461]: time="2025-09-12T17:36:02.656579858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-km7sb,Uid:36cb1f24-39ff-404a-a6eb-ecb4d0146f82,Namespace:kube-system,Attempt:1,} returns sandbox id \"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6\"" Sep 12 17:36:02.658438 kubelet[2505]: E0912 17:36:02.658380 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:02.663646 containerd[1461]: time="2025-09-12T17:36:02.663591135Z" level=info msg="CreateContainer within sandbox \"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:36:02.687524 containerd[1461]: time="2025-09-12T17:36:02.687453822Z" level=info msg="CreateContainer within sandbox \"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4980a493c03f514fbcdebbde705a511f02c9a6427f2dfca89b3694d4f1e76914\"" Sep 12 17:36:02.688180 containerd[1461]: time="2025-09-12T17:36:02.688138965Z" level=info msg="StartContainer for \"4980a493c03f514fbcdebbde705a511f02c9a6427f2dfca89b3694d4f1e76914\"" Sep 12 17:36:02.736985 systemd[1]: Started cri-containerd-4980a493c03f514fbcdebbde705a511f02c9a6427f2dfca89b3694d4f1e76914.scope - libcontainer container 4980a493c03f514fbcdebbde705a511f02c9a6427f2dfca89b3694d4f1e76914. Sep 12 17:36:02.879862 containerd[1461]: time="2025-09-12T17:36:02.879803633Z" level=info msg="StartContainer for \"4980a493c03f514fbcdebbde705a511f02c9a6427f2dfca89b3694d4f1e76914\" returns successfully" Sep 12 17:36:03.191988 systemd-networkd[1402]: cali478ce5a866d: Gained IPv6LL Sep 12 17:36:03.511984 systemd-networkd[1402]: cali86693cd83ab: Gained IPv6LL Sep 12 17:36:03.577887 containerd[1461]: time="2025-09-12T17:36:03.577842117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:03.579605 kubelet[2505]: E0912 17:36:03.579539 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:03.579605 kubelet[2505]: E0912 17:36:03.579539 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:03.591341 containerd[1461]: time="2025-09-12T17:36:03.591260555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:36:03.626036 containerd[1461]: time="2025-09-12T17:36:03.625944195Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:03.656022 containerd[1461]: time="2025-09-12T17:36:03.655958085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:03.656798 containerd[1461]: 
time="2025-09-12T17:36:03.656740123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.927712656s" Sep 12 17:36:03.656798 containerd[1461]: time="2025-09-12T17:36:03.656786682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:36:03.658226 containerd[1461]: time="2025-09-12T17:36:03.657855132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:03.659158 containerd[1461]: time="2025-09-12T17:36:03.659124399Z" level=info msg="CreateContainer within sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:36:03.764420 kubelet[2505]: I0912 17:36:03.762460 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-km7sb" podStartSLOduration=59.762440957 podStartE2EDuration="59.762440957s" podCreationTimestamp="2025-09-12 17:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:36:03.637046488 +0000 UTC m=+64.536425920" watchObservedRunningTime="2025-09-12 17:36:03.762440957 +0000 UTC m=+64.661820389" Sep 12 17:36:03.797284 containerd[1461]: time="2025-09-12T17:36:03.797211305Z" level=info msg="CreateContainer within sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\"" Sep 12 17:36:03.799663 containerd[1461]: time="2025-09-12T17:36:03.799395426Z" level=info msg="StartContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\"" Sep 12 17:36:03.847167 systemd[1]: Started cri-containerd-917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df.scope - libcontainer container 917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df. Sep 12 17:36:03.897066 systemd-networkd[1402]: califb80a0cac0b: Gained IPv6LL Sep 12 17:36:03.919335 containerd[1461]: time="2025-09-12T17:36:03.919263742Z" level=info msg="StartContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" returns successfully" Sep 12 17:36:04.284423 containerd[1461]: time="2025-09-12T17:36:04.284370083Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.444 [INFO][5171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.444 [INFO][5171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" iface="eth0" netns="/var/run/netns/cni-8541a85e-3992-db56-4e56-5297bad2e393" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.445 [INFO][5171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" iface="eth0" netns="/var/run/netns/cni-8541a85e-3992-db56-4e56-5297bad2e393" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.445 [INFO][5171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" iface="eth0" netns="/var/run/netns/cni-8541a85e-3992-db56-4e56-5297bad2e393" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.445 [INFO][5171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.445 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.470 [INFO][5180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.470 [INFO][5180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.470 [INFO][5180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.560 [WARNING][5180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.560 [INFO][5180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.566 [INFO][5180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:04.573507 containerd[1461]: 2025-09-12 17:36:04.569 [INFO][5171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:36:04.574232 containerd[1461]: time="2025-09-12T17:36:04.573632878Z" level=info msg="TearDown network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" successfully" Sep 12 17:36:04.574232 containerd[1461]: time="2025-09-12T17:36:04.573668357Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" returns successfully" Sep 12 17:36:04.574485 containerd[1461]: time="2025-09-12T17:36:04.574445794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rwvqt,Uid:5d9c942d-8aa0-4a74-bc26-89c34879a081,Namespace:calico-system,Attempt:1,}" Sep 12 17:36:04.577399 systemd[1]: run-netns-cni\x2d8541a85e\x2d3992\x2ddb56\x2d4e56\x2d5297bad2e393.mount: Deactivated successfully. 
Sep 12 17:36:04.583571 kubelet[2505]: E0912 17:36:04.583537 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:04.584096 kubelet[2505]: E0912 17:36:04.583540 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:05.421060 systemd[1]: Started sshd@11-10.0.0.87:22-10.0.0.1:58476.service - OpenSSH per-connection server daemon (10.0.0.1:58476). Sep 12 17:36:05.465783 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 58476 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:05.467892 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:05.472393 systemd-logind[1445]: New session 12 of user core. Sep 12 17:36:05.480001 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:36:05.485673 systemd-networkd[1402]: cali9f89debc1a0: Link UP Sep 12 17:36:05.487325 systemd-networkd[1402]: cali9f89debc1a0: Gained carrier Sep 12 17:36:05.585397 kubelet[2505]: E0912 17:36:05.585355 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.368 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--rwvqt-eth0 goldmane-7988f88666- calico-system 5d9c942d-8aa0-4a74-bc26-89c34879a081 1095 0 2025-09-12 17:35:23 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-rwvqt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9f89debc1a0 [] [] }} ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.368 [INFO][5191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.395 [INFO][5205] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" HandleID="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.396 [INFO][5205] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" HandleID="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002af010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost",
"pod":"goldmane-7988f88666-rwvqt", "timestamp":"2025-09-12 17:36:05.395906723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.396 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.396 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.396 [INFO][5205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.402 [INFO][5205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.407 [INFO][5205] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.410 [INFO][5205] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.412 [INFO][5205] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.417 [INFO][5205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.417 [INFO][5205] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.418 [INFO][5205] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.434 [INFO][5205] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.479 [INFO][5205] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.479 [INFO][5205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" host="localhost" Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.479 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:36:05.690040 containerd[1461]: 2025-09-12 17:36:05.479 [INFO][5205] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" HandleID="k8s-pod-network.09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.483 [INFO][5191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rwvqt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"5d9c942d-8aa0-4a74-bc26-89c34879a081", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-rwvqt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f89debc1a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.483 [INFO][5191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.483 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f89debc1a0 ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.486 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.486 [INFO][5191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rwvqt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"5d9c942d-8aa0-4a74-bc26-89c34879a081", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec", Pod:"goldmane-7988f88666-rwvqt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f89debc1a0", MAC:"ea:5e:43:f0:de:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:05.691237 containerd[1461]: 2025-09-12 17:36:05.685 [INFO][5191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec" Namespace="calico-system" Pod="goldmane-7988f88666-rwvqt" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:36:05.776361 containerd[1461]: time="2025-09-12T17:36:05.775953198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:36:05.776361 containerd[1461]: time="2025-09-12T17:36:05.776082407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:36:05.776361 containerd[1461]: time="2025-09-12T17:36:05.776104220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:05.776361 containerd[1461]: time="2025-09-12T17:36:05.776238588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:36:05.797410 sshd[5215]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:05.801637 systemd[1]: Started cri-containerd-09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec.scope - libcontainer container 09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec. Sep 12 17:36:05.805608 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:36:05.806189 systemd[1]: sshd@11-10.0.0.87:22-10.0.0.1:58476.service: Deactivated successfully. Sep 12 17:36:05.809382 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:36:05.810891 systemd-logind[1445]: Removed session 12. 
Sep 12 17:36:05.821089 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:36:05.848492 containerd[1461]: time="2025-09-12T17:36:05.848452912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-rwvqt,Uid:5d9c942d-8aa0-4a74-bc26-89c34879a081,Namespace:calico-system,Attempt:1,} returns sandbox id \"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec\"" Sep 12 17:36:06.776411 systemd-networkd[1402]: cali9f89debc1a0: Gained IPv6LL Sep 12 17:36:09.018357 containerd[1461]: time="2025-09-12T17:36:09.018273748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.054870 containerd[1461]: time="2025-09-12T17:36:09.054696728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:36:09.130579 containerd[1461]: time="2025-09-12T17:36:09.130501440Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.169068 containerd[1461]: time="2025-09-12T17:36:09.168968283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:09.169717 containerd[1461]: time="2025-09-12T17:36:09.169667245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.511778498s" Sep 12 17:36:09.169717 containerd[1461]: time="2025-09-12T17:36:09.169713354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:09.171151 containerd[1461]: time="2025-09-12T17:36:09.171112008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:36:09.174034 containerd[1461]: time="2025-09-12T17:36:09.173982138Z" level=info msg="CreateContainer within sandbox \"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:09.589498 containerd[1461]: time="2025-09-12T17:36:09.589368516Z" level=info msg="CreateContainer within sandbox \"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d7239d3af012bed8bf4b3c78f7a3f76da15cabe4420b716a324f86722f2e5b9\"" Sep 12 17:36:09.590591 containerd[1461]: time="2025-09-12T17:36:09.590444361Z" level=info msg="StartContainer for \"8d7239d3af012bed8bf4b3c78f7a3f76da15cabe4420b716a324f86722f2e5b9\"" Sep 12 17:36:09.649043 systemd[1]: Started cri-containerd-8d7239d3af012bed8bf4b3c78f7a3f76da15cabe4420b716a324f86722f2e5b9.scope - libcontainer container 8d7239d3af012bed8bf4b3c78f7a3f76da15cabe4420b716a324f86722f2e5b9. 
Sep 12 17:36:09.945244 containerd[1461]: time="2025-09-12T17:36:09.944990063Z" level=info msg="StartContainer for \"8d7239d3af012bed8bf4b3c78f7a3f76da15cabe4420b716a324f86722f2e5b9\" returns successfully" Sep 12 17:36:10.776977 kubelet[2505]: I0912 17:36:10.776729 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5455bb578c-4b9s4" podStartSLOduration=42.130107794 podStartE2EDuration="50.776708008s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="2025-09-12 17:36:00.524293436 +0000 UTC m=+61.423672868" lastFinishedPulling="2025-09-12 17:36:09.17089364 +0000 UTC m=+70.070273082" observedRunningTime="2025-09-12 17:36:10.77601013 +0000 UTC m=+71.675389562" watchObservedRunningTime="2025-09-12 17:36:10.776708008 +0000 UTC m=+71.676087440" Sep 12 17:36:10.816781 systemd[1]: Started sshd@12-10.0.0.87:22-10.0.0.1:52928.service - OpenSSH per-connection server daemon (10.0.0.1:52928). Sep 12 17:36:10.885604 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 52928 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:10.888339 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:10.897388 systemd-logind[1445]: New session 13 of user core. Sep 12 17:36:10.903986 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:36:11.284656 kubelet[2505]: E0912 17:36:11.284591 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:11.361363 sshd[5344]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:11.366616 systemd[1]: sshd@12-10.0.0.87:22-10.0.0.1:52928.service: Deactivated successfully. Sep 12 17:36:11.369203 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:36:11.370482 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:36:11.371568 systemd-logind[1445]: Removed session 13. 
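The startup-latency entry above decomposes cleanly: podStartSLOduration is podStartE2EDuration minus the image-pull window, and the monotonic m=+ offsets on the pull timestamps reproduce the logged numbers exactly (up to float rounding):

```go
package main

import "fmt"

func main() {
	const (
		e2e           = 50.776708008 // podStartE2EDuration, seconds
		firstPullMono = 61.423672868 // firstStartedPulling, m=+ offset
		lastPullMono  = 70.070273082 // lastFinishedPulling, m=+ offset
	)
	pullWindow := lastPullMono - firstPullMono
	fmt.Printf("pull window:  %.9fs\n", pullWindow)      // 8.646600214s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pullWindow)  // 42.130107794s
}
```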
Sep 12 17:36:14.496795 containerd[1461]: time="2025-09-12T17:36:14.494825545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:14.806562 containerd[1461]: time="2025-09-12T17:36:14.806349190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:36:14.829027 containerd[1461]: time="2025-09-12T17:36:14.828955809Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:14.885071 containerd[1461]: time="2025-09-12T17:36:14.884961436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:14.886302 containerd[1461]: time="2025-09-12T17:36:14.886211628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 5.715051237s" Sep 12 17:36:14.886302 containerd[1461]: time="2025-09-12T17:36:14.886274649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:36:14.887943 containerd[1461]: time="2025-09-12T17:36:14.887859272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:36:14.889726 containerd[1461]: time="2025-09-12T17:36:14.889694926Z" level=info msg="CreateContainer within sandbox \"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:36:15.343970 containerd[1461]: time="2025-09-12T17:36:15.343898692Z" level=info msg="CreateContainer within sandbox \"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca\"" Sep 12 17:36:15.345477 containerd[1461]: time="2025-09-12T17:36:15.344802781Z" level=info msg="StartContainer for \"241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca\"" Sep 12 17:36:15.405816 systemd[1]: run-containerd-runc-k8s.io-241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca-runc.8rchki.mount: Deactivated successfully. Sep 12 17:36:15.420335 systemd[1]: Started cri-containerd-241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca.scope - libcontainer container 241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca. Sep 12 17:36:15.628668 containerd[1461]: time="2025-09-12T17:36:15.628602571Z" level=info msg="StartContainer for \"241ba95d424ca6d4cb0d9d595365cdc4418f8d0ddf0189e3be01b8fd1f241aca\" returns successfully" Sep 12 17:36:16.373712 systemd[1]: Started sshd@13-10.0.0.87:22-10.0.0.1:52940.service - OpenSSH per-connection server daemon (10.0.0.1:52940). 
Sep 12 17:36:16.470554 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 52940 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:16.472751 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:16.477697 systemd-logind[1445]: New session 14 of user core. Sep 12 17:36:16.482981 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:36:16.705845 sshd[5412]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:16.711914 systemd[1]: sshd@13-10.0.0.87:22-10.0.0.1:52940.service: Deactivated successfully. Sep 12 17:36:16.714732 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:36:16.715686 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:36:16.716708 systemd-logind[1445]: Removed session 14. Sep 12 17:36:17.284362 kubelet[2505]: E0912 17:36:17.284316 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:17.284970 kubelet[2505]: E0912 17:36:17.284467 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:20.154994 containerd[1461]: time="2025-09-12T17:36:20.154930837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:20.157886 containerd[1461]: time="2025-09-12T17:36:20.156569474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:36:20.159705 containerd[1461]: time="2025-09-12T17:36:20.159160749Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:20.166310 containerd[1461]: time="2025-09-12T17:36:20.166239528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:20.167239 containerd[1461]: time="2025-09-12T17:36:20.167154914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.279228123s" Sep 12 17:36:20.167239 containerd[1461]: time="2025-09-12T17:36:20.167225840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:36:20.169314 containerd[1461]: time="2025-09-12T17:36:20.169273718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:36:20.177946 containerd[1461]: time="2025-09-12T17:36:20.177905921Z" level=info msg="CreateContainer within sandbox \"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:36:20.276728 containerd[1461]: time="2025-09-12T17:36:20.276653625Z" 
level=info msg="CreateContainer within sandbox \"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"96dac13e9c27e836a0961d5807198f9381587db7f699b2f624fd4452f758642e\"" Sep 12 17:36:20.277433 containerd[1461]: time="2025-09-12T17:36:20.277390742Z" level=info msg="StartContainer for \"96dac13e9c27e836a0961d5807198f9381587db7f699b2f624fd4452f758642e\"" Sep 12 17:36:20.320973 systemd[1]: Started cri-containerd-96dac13e9c27e836a0961d5807198f9381587db7f699b2f624fd4452f758642e.scope - libcontainer container 96dac13e9c27e836a0961d5807198f9381587db7f699b2f624fd4452f758642e. Sep 12 17:36:20.393222 containerd[1461]: time="2025-09-12T17:36:20.393160689Z" level=info msg="StartContainer for \"96dac13e9c27e836a0961d5807198f9381587db7f699b2f624fd4452f758642e\" returns successfully" Sep 12 17:36:20.971434 containerd[1461]: time="2025-09-12T17:36:20.971344226Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:20.973170 containerd[1461]: time="2025-09-12T17:36:20.973115756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:36:20.976585 containerd[1461]: time="2025-09-12T17:36:20.976501888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 807.187804ms" Sep 12 17:36:20.976585 containerd[1461]: time="2025-09-12T17:36:20.976559418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:36:20.977668 containerd[1461]: time="2025-09-12T17:36:20.977633087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:36:20.979338 containerd[1461]: time="2025-09-12T17:36:20.979290850Z" level=info msg="CreateContainer within sandbox \"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:36:20.996141 containerd[1461]: time="2025-09-12T17:36:20.996058729Z" level=info msg="CreateContainer within sandbox \"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d0fee769d6007cfafa92e2dd6a17398aeedf7fae8a2ebaa956281f52912818b\"" Sep 12 17:36:20.998539 containerd[1461]: time="2025-09-12T17:36:20.997177394Z" level=info msg="StartContainer for \"8d0fee769d6007cfafa92e2dd6a17398aeedf7fae8a2ebaa956281f52912818b\"" Sep 12 17:36:21.027982 systemd[1]: Started cri-containerd-8d0fee769d6007cfafa92e2dd6a17398aeedf7fae8a2ebaa956281f52912818b.scope - libcontainer container 8d0fee769d6007cfafa92e2dd6a17398aeedf7fae8a2ebaa956281f52912818b. 
Sep 12 17:36:21.106860 containerd[1461]: time="2025-09-12T17:36:21.106810121Z" level=info msg="StartContainer for \"8d0fee769d6007cfafa92e2dd6a17398aeedf7fae8a2ebaa956281f52912818b\" returns successfully" Sep 12 17:36:21.675181 kubelet[2505]: I0912 17:36:21.675100 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b9b56c74c-m8j8k" podStartSLOduration=40.403819274 podStartE2EDuration="58.675078364s" podCreationTimestamp="2025-09-12 17:35:23 +0000 UTC" firstStartedPulling="2025-09-12 17:36:01.897094712 +0000 UTC m=+62.796474144" lastFinishedPulling="2025-09-12 17:36:20.168353802 +0000 UTC m=+81.067733234" observedRunningTime="2025-09-12 17:36:20.910450892 +0000 UTC m=+81.809830324" watchObservedRunningTime="2025-09-12 17:36:21.675078364 +0000 UTC m=+82.574457796" Sep 12 17:36:21.728546 systemd[1]: Started sshd@14-10.0.0.87:22-10.0.0.1:50084.service - OpenSSH per-connection server daemon (10.0.0.1:50084). Sep 12 17:36:21.735717 kubelet[2505]: I0912 17:36:21.735639 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5455bb578c-fhlbd" podStartSLOduration=43.263539806 podStartE2EDuration="1m1.7356162s" podCreationTimestamp="2025-09-12 17:35:20 +0000 UTC" firstStartedPulling="2025-09-12 17:36:02.505375027 +0000 UTC m=+63.404754459" lastFinishedPulling="2025-09-12 17:36:20.97745142 +0000 UTC m=+81.876830853" observedRunningTime="2025-09-12 17:36:21.675528884 +0000 UTC m=+82.574908316" watchObservedRunningTime="2025-09-12 17:36:21.7356162 +0000 UTC m=+82.634995632" Sep 12 17:36:21.795260 sshd[5595]: Accepted publickey for core from 10.0.0.1 port 50084 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:21.797236 sshd[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:21.802199 systemd-logind[1445]: New session 15 of user core. Sep 12 17:36:21.808973 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:36:21.965683 sshd[5595]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:21.976541 systemd[1]: sshd@14-10.0.0.87:22-10.0.0.1:50084.service: Deactivated successfully. Sep 12 17:36:21.979493 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:36:21.980315 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:36:21.993429 systemd[1]: Started sshd@15-10.0.0.87:22-10.0.0.1:50092.service - OpenSSH per-connection server daemon (10.0.0.1:50092). Sep 12 17:36:21.995531 systemd-logind[1445]: Removed session 15. Sep 12 17:36:22.037020 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:22.040682 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:22.049308 systemd-logind[1445]: New session 16 of user core. Sep 12 17:36:22.056794 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:36:22.501266 sshd[5611]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:22.511379 systemd[1]: sshd@15-10.0.0.87:22-10.0.0.1:50092.service: Deactivated successfully. Sep 12 17:36:22.513855 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:36:22.516691 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:36:22.522206 systemd[1]: Started sshd@16-10.0.0.87:22-10.0.0.1:50098.service - OpenSSH per-connection server daemon (10.0.0.1:50098). 
Sep 12 17:36:22.523984 systemd-logind[1445]: Removed session 16. Sep 12 17:36:22.563842 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 50098 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:22.565899 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:22.570472 systemd-logind[1445]: New session 17 of user core. Sep 12 17:36:22.582974 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:36:22.650936 kubelet[2505]: I0912 17:36:22.650898 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:22.837143 sshd[5624]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:22.841710 systemd[1]: sshd@16-10.0.0.87:22-10.0.0.1:50098.service: Deactivated successfully. Sep 12 17:36:22.844991 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:36:22.846169 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:36:22.847436 systemd-logind[1445]: Removed session 17. Sep 12 17:36:25.489961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155777900.mount: Deactivated successfully. Sep 12 17:36:25.884091 containerd[1461]: time="2025-09-12T17:36:25.884016659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:25.886652 containerd[1461]: time="2025-09-12T17:36:25.886475802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:36:25.889616 containerd[1461]: time="2025-09-12T17:36:25.889340457Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:25.899330 containerd[1461]: time="2025-09-12T17:36:25.899251642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:25.900531 containerd[1461]: time="2025-09-12T17:36:25.900471395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.922795786s" Sep 12 17:36:25.900531 containerd[1461]: time="2025-09-12T17:36:25.900519075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:36:25.901963 containerd[1461]: time="2025-09-12T17:36:25.901913861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:36:25.904015 containerd[1461]: time="2025-09-12T17:36:25.903132412Z" level=info msg="CreateContainer within sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:36:26.359942 containerd[1461]: time="2025-09-12T17:36:26.359863970Z" level=info msg="CreateContainer within sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" for 
&ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\"" Sep 12 17:36:26.361257 containerd[1461]: time="2025-09-12T17:36:26.361162902Z" level=info msg="StartContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\"" Sep 12 17:36:26.449077 systemd[1]: Started cri-containerd-29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4.scope - libcontainer container 29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4. Sep 12 17:36:26.517643 containerd[1461]: time="2025-09-12T17:36:26.517575674Z" level=info msg="StartContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" returns successfully" Sep 12 17:36:26.690632 kubelet[2505]: I0912 17:36:26.690120 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6487485898-njsl8" podStartSLOduration=33.517248283 podStartE2EDuration="59.690099023s" podCreationTimestamp="2025-09-12 17:35:27 +0000 UTC" firstStartedPulling="2025-09-12 17:35:59.728706663 +0000 UTC m=+60.628086095" lastFinishedPulling="2025-09-12 17:36:25.901557413 +0000 UTC m=+86.800936835" observedRunningTime="2025-09-12 17:36:26.686416975 +0000 UTC m=+87.585796417" watchObservedRunningTime="2025-09-12 17:36:26.690099023 +0000 UTC m=+87.589478456" Sep 12 17:36:26.784178 containerd[1461]: time="2025-09-12T17:36:26.784090238Z" level=info msg="StopContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" with timeout 30 (s)" Sep 12 17:36:26.791143 containerd[1461]: time="2025-09-12T17:36:26.791014449Z" level=info msg="Stop container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" with signal terminated" Sep 12 17:36:26.792445 containerd[1461]: time="2025-09-12T17:36:26.792410337Z" level=info msg="StopContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" with timeout 30 (s)" Sep 12 17:36:26.792695 containerd[1461]: time="2025-09-12T17:36:26.792676513Z" level=info msg="Stop container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" with signal terminated" Sep 12 17:36:26.804034 systemd[1]: cri-containerd-29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4.scope: Deactivated successfully. Sep 12 17:36:26.816679 systemd[1]: cri-containerd-917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df.scope: Deactivated successfully. Sep 12 17:36:26.850929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df-rootfs.mount: Deactivated successfully. Sep 12 17:36:26.868142 containerd[1461]: time="2025-09-12T17:36:26.847580880Z" level=info msg="shim disconnected" id=917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df namespace=k8s.io Sep 12 17:36:26.876888 containerd[1461]: time="2025-09-12T17:36:26.876702938Z" level=warning msg="cleaning up after shim disconnected" id=917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df namespace=k8s.io Sep 12 17:36:26.876888 containerd[1461]: time="2025-09-12T17:36:26.876804331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:27.054113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4-rootfs.mount: Deactivated successfully. 
Sep 12 17:36:27.690100 containerd[1461]: time="2025-09-12T17:36:27.690028755Z" level=info msg="shim disconnected" id=29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4 namespace=k8s.io Sep 12 17:36:27.691438 containerd[1461]: time="2025-09-12T17:36:27.690090312Z" level=warning msg="cleaning up after shim disconnected" id=29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4 namespace=k8s.io Sep 12 17:36:27.691438 containerd[1461]: time="2025-09-12T17:36:27.690138884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:27.695156 containerd[1461]: time="2025-09-12T17:36:27.695107589Z" level=info msg="StopContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" returns successfully" Sep 12 17:36:27.717953 containerd[1461]: time="2025-09-12T17:36:27.717889822Z" level=info msg="StopContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" returns successfully" Sep 12 17:36:27.718589 containerd[1461]: time="2025-09-12T17:36:27.718550339Z" level=info msg="StopPodSandbox for \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\"" Sep 12 17:36:27.718640 containerd[1461]: time="2025-09-12T17:36:27.718613109Z" level=info msg="Container to stop \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:36:27.718640 containerd[1461]: time="2025-09-12T17:36:27.718630782Z" level=info msg="Container to stop \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:36:27.723213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f-shm.mount: Deactivated successfully. Sep 12 17:36:27.728307 systemd[1]: cri-containerd-8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f.scope: Deactivated successfully. Sep 12 17:36:27.756476 containerd[1461]: time="2025-09-12T17:36:27.756140501Z" level=info msg="shim disconnected" id=8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f namespace=k8s.io Sep 12 17:36:27.756476 containerd[1461]: time="2025-09-12T17:36:27.756316977Z" level=warning msg="cleaning up after shim disconnected" id=8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f namespace=k8s.io Sep 12 17:36:27.756476 containerd[1461]: time="2025-09-12T17:36:27.756330812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:36:27.758890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f-rootfs.mount: Deactivated successfully. Sep 12 17:36:27.855477 systemd[1]: Started sshd@17-10.0.0.87:22-10.0.0.1:50114.service - OpenSSH per-connection server daemon (10.0.0.1:50114). Sep 12 17:36:27.886008 systemd-networkd[1402]: cali4ced7edac32: Link DOWN Sep 12 17:36:27.886019 systemd-networkd[1402]: cali4ced7edac32: Lost carrier Sep 12 17:36:27.924896 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 50114 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:27.928038 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:27.935969 systemd-logind[1445]: New session 18 of user core. Sep 12 17:36:27.945113 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.882 [INFO][5793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.883 [INFO][5793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" iface="eth0" netns="/var/run/netns/cni-7d2d897a-4904-83b6-ae5e-6e3d4144511b" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.883 [INFO][5793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" iface="eth0" netns="/var/run/netns/cni-7d2d897a-4904-83b6-ae5e-6e3d4144511b" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.901 [INFO][5793] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" after=18.632846ms iface="eth0" netns="/var/run/netns/cni-7d2d897a-4904-83b6-ae5e-6e3d4144511b" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.901 [INFO][5793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.901 [INFO][5793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.935 [INFO][5809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.935 [INFO][5809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.935 [INFO][5809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.987 [INFO][5809] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.987 [INFO][5809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.989 [INFO][5809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:27.998805 containerd[1461]: 2025-09-12 17:36:27.993 [INFO][5793] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Sep 12 17:36:27.998805 containerd[1461]: time="2025-09-12T17:36:27.998550288Z" level=info msg="TearDown network for sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" successfully" Sep 12 17:36:27.998805 containerd[1461]: time="2025-09-12T17:36:27.998588230Z" level=info msg="StopPodSandbox for \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" returns successfully" Sep 12 17:36:27.999857 containerd[1461]: time="2025-09-12T17:36:27.999375478Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\"" Sep 12 17:36:28.003388 systemd[1]: run-netns-cni\x2d7d2d897a\x2d4904\x2d83b6\x2dae5e\x2d6e3d4144511b.mount: Deactivated successfully. Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.058 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6487485898--njsl8-eth0", GenerateName:"whisker-6487485898-", Namespace:"calico-system", SelfLink:"", UID:"14e6e1cd-60e7-4891-bfed-d3421afafc80", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6487485898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f", Pod:"whisker-6487485898-njsl8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ced7edac32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.058 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.058 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns="" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.058 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.058 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.086 [INFO][5848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.086 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.086 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.388 [WARNING][5848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.388 [INFO][5848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0" Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.615 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:36:28.622890 containerd[1461]: 2025-09-12 17:36:28.619 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Sep 12 17:36:28.623507 containerd[1461]: time="2025-09-12T17:36:28.622949812Z" level=info msg="TearDown network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" successfully" Sep 12 17:36:28.623507 containerd[1461]: time="2025-09-12T17:36:28.622987153Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" returns successfully" Sep 12 17:36:28.672173 kubelet[2505]: I0912 17:36:28.671790 2505 scope.go:117] "RemoveContainer" containerID="29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4" Sep 12 17:36:28.676567 containerd[1461]: time="2025-09-12T17:36:28.676505548Z" level=info msg="RemoveContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\"" Sep 12 17:36:28.788258 kubelet[2505]: I0912 17:36:28.788195 2505 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-974kd\" (UniqueName: \"kubernetes.io/projected/14e6e1cd-60e7-4891-bfed-d3421afafc80-kube-api-access-974kd\") pod \"14e6e1cd-60e7-4891-bfed-d3421afafc80\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " Sep 12 17:36:28.788258 kubelet[2505]: I0912 17:36:28.788273 2505 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-ca-bundle\") pod \"14e6e1cd-60e7-4891-bfed-d3421afafc80\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " Sep 12 17:36:28.788455 kubelet[2505]: I0912 17:36:28.788301 2505 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-backend-key-pair\") pod \"14e6e1cd-60e7-4891-bfed-d3421afafc80\" (UID: \"14e6e1cd-60e7-4891-bfed-d3421afafc80\") " Sep 12 17:36:28.788967 kubelet[2505]: I0912 17:36:28.788909 2505 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "14e6e1cd-60e7-4891-bfed-d3421afafc80" (UID: "14e6e1cd-60e7-4891-bfed-d3421afafc80"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:36:28.797780 kubelet[2505]: I0912 17:36:28.795479 2505 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "14e6e1cd-60e7-4891-bfed-d3421afafc80" (UID: "14e6e1cd-60e7-4891-bfed-d3421afafc80"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:36:28.797281 systemd[1]: var-lib-kubelet-pods-14e6e1cd\x2d60e7\x2d4891\x2dbfed\x2dd3421afafc80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d974kd.mount: Deactivated successfully. Sep 12 17:36:28.798190 kubelet[2505]: I0912 17:36:28.798105 2505 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e6e1cd-60e7-4891-bfed-d3421afafc80-kube-api-access-974kd" (OuterVolumeSpecName: "kube-api-access-974kd") pod "14e6e1cd-60e7-4891-bfed-d3421afafc80" (UID: "14e6e1cd-60e7-4891-bfed-d3421afafc80"). InnerVolumeSpecName "kube-api-access-974kd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:36:28.801539 systemd[1]: var-lib-kubelet-pods-14e6e1cd\x2d60e7\x2d4891\x2dbfed\x2dd3421afafc80-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:36:28.965809 kubelet[2505]: I0912 17:36:28.964904 2505 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:28.965809 kubelet[2505]: I0912 17:36:28.964931 2505 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-974kd\" (UniqueName: \"kubernetes.io/projected/14e6e1cd-60e7-4891-bfed-d3421afafc80-kube-api-access-974kd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:28.965809 kubelet[2505]: I0912 17:36:28.964940 2505 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e6e1cd-60e7-4891-bfed-d3421afafc80-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:36:28.978530 systemd[1]: Removed slice kubepods-besteffort-pod14e6e1cd_60e7_4891_bfed_d3421afafc80.slice - libcontainer container kubepods-besteffort-pod14e6e1cd_60e7_4891_bfed_d3421afafc80.slice. Sep 12 17:36:29.448215 containerd[1461]: time="2025-09-12T17:36:29.448160163Z" level=info msg="RemoveContainer for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" returns successfully" Sep 12 17:36:29.449106 kubelet[2505]: I0912 17:36:29.448594 2505 scope.go:117] "RemoveContainer" containerID="917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df" Sep 12 17:36:29.450151 sshd[5801]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:29.450482 containerd[1461]: time="2025-09-12T17:36:29.450180895Z" level=info msg="RemoveContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\"" Sep 12 17:36:29.455554 systemd[1]: sshd@17-10.0.0.87:22-10.0.0.1:50114.service: Deactivated successfully. Sep 12 17:36:29.457896 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:36:29.458824 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:36:29.459798 systemd-logind[1445]: Removed session 18. 
Sep 12 17:36:30.495912 containerd[1461]: time="2025-09-12T17:36:30.495846898Z" level=info msg="RemoveContainer for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" returns successfully" Sep 12 17:36:30.496431 kubelet[2505]: I0912 17:36:30.496205 2505 scope.go:117] "RemoveContainer" containerID="29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4" Sep 12 17:36:30.595601 containerd[1461]: time="2025-09-12T17:36:30.501790287Z" level=error msg="ContainerStatus for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": not found" Sep 12 17:36:30.620373 kubelet[2505]: E0912 17:36:30.620322 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": not found" containerID="29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4" Sep 12 17:36:30.620529 kubelet[2505]: I0912 17:36:30.620385 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4"} err="failed to get container status \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": not found" Sep 12 17:36:30.620529 kubelet[2505]: I0912 17:36:30.620425 2505 scope.go:117] "RemoveContainer" containerID="917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df" Sep 12 17:36:30.622398 containerd[1461]: time="2025-09-12T17:36:30.622336179Z" level=error msg="ContainerStatus for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": not found" Sep 12 17:36:30.623836 kubelet[2505]: E0912 17:36:30.622748 2505 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": not found" containerID="917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df" Sep 12 17:36:30.623836 kubelet[2505]: I0912 17:36:30.622833 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df"} err="failed to get container status \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": rpc error: code = NotFound desc = an error occurred when try to find container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": not found" Sep 12 17:36:30.623836 kubelet[2505]: I0912 17:36:30.622873 2505 scope.go:117] "RemoveContainer" containerID="29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4" Sep 12 17:36:30.624130 containerd[1461]: time="2025-09-12T17:36:30.623192386Z" level=error msg="ContainerStatus for \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": not found" Sep 12 
17:36:30.624185 kubelet[2505]: I0912 17:36:30.623832 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4"} err="failed to get container status \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d256a68883959abd823907c61446c3879c6a899510f0337f86874d11fde4b4\": not found" Sep 12 17:36:30.624185 kubelet[2505]: I0912 17:36:30.623880 2505 scope.go:117] "RemoveContainer" containerID="917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df" Sep 12 17:36:30.632926 containerd[1461]: time="2025-09-12T17:36:30.624290073Z" level=error msg="ContainerStatus for \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": not found" Sep 12 17:36:30.633090 kubelet[2505]: I0912 17:36:30.624530 2505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df"} err="failed to get container status \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": rpc error: code = NotFound desc = an error occurred when try to find container \"917103b0db9dfebd32f656a9c4c2df1911df96476b43b8c1a5a4554f6bdcd2df\": not found" Sep 12 17:36:31.287001 kubelet[2505]: I0912 17:36:31.286948 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e6e1cd-60e7-4891-bfed-d3421afafc80" path="/var/lib/kubelet/pods/14e6e1cd-60e7-4891-bfed-d3421afafc80/volumes" Sep 12 17:36:32.212998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681788626.mount: Deactivated successfully. 
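The repeated ContainerStatus NotFound errors above are benign: the containers were just removed, so the status probe races with deletion and the kubelet treats NotFound as "already gone". A sketch of that classification using grpc-go's status package:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyGone reports whether an error from the runtime just means the
// container no longer exists, which is safe to ignore during cleanup.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "container \"29d2…\" not found")
	fmt.Println(alreadyGone(err))                       // true: ignore it
	fmt.Println(alreadyGone(errors.New("i/o timeout"))) // false: a real failure
}
```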
Sep 12 17:36:32.831285 kubelet[2505]: I0912 17:36:32.831208 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:36:33.916701 containerd[1461]: time="2025-09-12T17:36:33.916546561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.920507 containerd[1461]: time="2025-09-12T17:36:33.920199820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:36:33.923724 containerd[1461]: time="2025-09-12T17:36:33.923611248Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.929686 containerd[1461]: time="2025-09-12T17:36:33.929478791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:33.930895 containerd[1461]: time="2025-09-12T17:36:33.930731952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.028767474s" Sep 12 17:36:33.930895 containerd[1461]: time="2025-09-12T17:36:33.930795252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:36:33.933316 containerd[1461]: time="2025-09-12T17:36:33.933261356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:36:33.934958 containerd[1461]: time="2025-09-12T17:36:33.934802332Z" level=info msg="CreateContainer within sandbox \"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:36:33.973530 containerd[1461]: time="2025-09-12T17:36:33.973455260Z" level=info msg="CreateContainer within sandbox \"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e\"" Sep 12 17:36:33.976998 containerd[1461]: time="2025-09-12T17:36:33.976004180Z" level=info msg="StartContainer for \"624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e\"" Sep 12 17:36:34.045212 systemd[1]: Started cri-containerd-624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e.scope - libcontainer container 624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e. Sep 12 17:36:34.232475 containerd[1461]: time="2025-09-12T17:36:34.231850895Z" level=info msg="StartContainer for \"624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e\" returns successfully" Sep 12 17:36:34.473465 systemd[1]: Started sshd@18-10.0.0.87:22-10.0.0.1:46006.service - OpenSSH per-connection server daemon (10.0.0.1:46006). 
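The goldmane pull above fetched roughly 66 MB in about 8 s through containerd. The same pull can be reproduced with containerd's Go client; a minimal sketch, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust if the host differs.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/goldmane:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name())
}
```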
Sep 12 17:36:34.600866 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 46006 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:34.604202 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:34.612928 systemd-logind[1445]: New session 19 of user core. Sep 12 17:36:34.619138 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:36:35.340646 sshd[5923]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:35.346450 systemd[1]: sshd@18-10.0.0.87:22-10.0.0.1:46006.service: Deactivated successfully. Sep 12 17:36:35.349425 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:36:35.350449 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:36:35.351734 systemd-logind[1445]: Removed session 19. Sep 12 17:36:35.682910 systemd[1]: run-containerd-runc-k8s.io-624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e-runc.CIakmW.mount: Deactivated successfully. Sep 12 17:36:36.886423 containerd[1461]: time="2025-09-12T17:36:36.886345929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:36.928486 containerd[1461]: time="2025-09-12T17:36:36.928386803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:36:36.979359 containerd[1461]: time="2025-09-12T17:36:36.979265997Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:37.007508 containerd[1461]: time="2025-09-12T17:36:37.007436429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:36:37.008429 containerd[1461]: time="2025-09-12T17:36:37.008398233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.075083496s" Sep 12 17:36:37.008499 containerd[1461]: time="2025-09-12T17:36:37.008435694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:36:37.010840 containerd[1461]: time="2025-09-12T17:36:37.010744685Z" level=info msg="CreateContainer within sandbox \"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:36:37.284529 kubelet[2505]: E0912 17:36:37.284375 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:37.326923 containerd[1461]: time="2025-09-12T17:36:37.326673056Z" level=info msg="CreateContainer within sandbox \"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"eb1d68379a0e8d38e7bb8feaa160a908db225bcb2d59fd356e55ece028b48bd8\"" Sep 12 17:36:37.328136 containerd[1461]: time="2025-09-12T17:36:37.328071279Z" level=info msg="StartContainer for \"eb1d68379a0e8d38e7bb8feaa160a908db225bcb2d59fd356e55ece028b48bd8\"" Sep 12 17:36:37.364071 systemd[1]: Started cri-containerd-eb1d68379a0e8d38e7bb8feaa160a908db225bcb2d59fd356e55ece028b48bd8.scope - libcontainer container eb1d68379a0e8d38e7bb8feaa160a908db225bcb2d59fd356e55ece028b48bd8. Sep 12 17:36:37.568283 containerd[1461]: time="2025-09-12T17:36:37.567946596Z" level=info msg="StartContainer for \"eb1d68379a0e8d38e7bb8feaa160a908db225bcb2d59fd356e55ece028b48bd8\" returns successfully" Sep 12 17:36:37.680027 kubelet[2505]: I0912 17:36:37.679958 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-rwvqt" podStartSLOduration=46.597415474 podStartE2EDuration="1m14.6799408s" podCreationTimestamp="2025-09-12 17:35:23 +0000 UTC" firstStartedPulling="2025-09-12 17:36:05.849651329 +0000 UTC m=+66.749030761" lastFinishedPulling="2025-09-12 17:36:33.932176655 +0000 UTC m=+94.831556087" observedRunningTime="2025-09-12 17:36:34.712595374 +0000 UTC m=+95.611974796" watchObservedRunningTime="2025-09-12 17:36:37.6799408 +0000 UTC m=+98.579320232" Sep 12 17:36:37.680375 kubelet[2505]: I0912 17:36:37.680341 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5kc8t" podStartSLOduration=39.104507289 podStartE2EDuration="1m14.680337092s" podCreationTimestamp="2025-09-12 17:35:23 +0000 UTC" firstStartedPulling="2025-09-12 17:36:01.433293395 +0000 UTC m=+62.332672827" lastFinishedPulling="2025-09-12 17:36:37.009123198 +0000 UTC m=+97.908502630" observedRunningTime="2025-09-12 17:36:37.679267012 +0000 UTC m=+98.578646434" watchObservedRunningTime="2025-09-12 17:36:37.680337092 +0000 UTC m=+98.579716524" Sep 12 17:36:38.424536 kubelet[2505]: I0912 17:36:38.424476 2505 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:36:38.424536 kubelet[2505]: I0912 17:36:38.424548 2505 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:36:40.354345 systemd[1]: Started sshd@19-10.0.0.87:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). Sep 12 17:36:40.415223 sshd[6081]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:40.418726 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:40.426464 systemd-logind[1445]: New session 20 of user core. Sep 12 17:36:40.433989 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:36:40.694101 sshd[6081]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:40.707526 systemd[1]: sshd@19-10.0.0.87:22-10.0.0.1:47614.service: Deactivated successfully. Sep 12 17:36:40.711561 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:36:40.714868 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:36:40.723517 systemd[1]: Started sshd@20-10.0.0.87:22-10.0.0.1:47620.service - OpenSSH per-connection server daemon (10.0.0.1:47620). Sep 12 17:36:40.725616 systemd-logind[1445]: Removed session 20. 
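The pod_startup_latency_tracker lines encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). A short Go check that reproduces the goldmane-7988f88666-rwvqt numbers from the timestamps above (the monotonic "m=+..." suffixes are dropped, since time.Parse does not accept them):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time string form, as logged.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-09-12 17:35:23 +0000 UTC")
	running := parse("2025-09-12 17:36:37.6799408 +0000 UTC")
	pullStart := parse("2025-09-12 17:36:05.849651329 +0000 UTC")
	pullEnd := parse("2025-09-12 17:36:33.932176655 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)

	fmt.Println("podStartE2EDuration:", e2e) // 1m14.6799408s
	fmt.Println("podStartSLOduration:", slo) // 46.597415474s
}
```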
Sep 12 17:36:40.775750 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 47620 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:40.778292 sshd[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:40.783909 systemd-logind[1445]: New session 21 of user core. Sep 12 17:36:40.792220 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:36:41.304650 sshd[6096]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:41.314524 systemd[1]: sshd@20-10.0.0.87:22-10.0.0.1:47620.service: Deactivated successfully. Sep 12 17:36:41.317711 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:36:41.319698 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:36:41.328221 systemd[1]: Started sshd@21-10.0.0.87:22-10.0.0.1:47626.service - OpenSSH per-connection server daemon (10.0.0.1:47626). Sep 12 17:36:41.329081 systemd-logind[1445]: Removed session 21. Sep 12 17:36:41.373720 sshd[6108]: Accepted publickey for core from 10.0.0.1 port 47626 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:41.375892 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:41.380353 systemd-logind[1445]: New session 22 of user core. Sep 12 17:36:41.385937 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:36:42.284890 kubelet[2505]: E0912 17:36:42.284421 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:36:44.122805 sshd[6108]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:44.133707 systemd[1]: sshd@21-10.0.0.87:22-10.0.0.1:47626.service: Deactivated successfully. Sep 12 17:36:44.136354 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:36:44.138875 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:36:44.144097 systemd[1]: Started sshd@22-10.0.0.87:22-10.0.0.1:47642.service - OpenSSH per-connection server daemon (10.0.0.1:47642). Sep 12 17:36:44.145844 systemd-logind[1445]: Removed session 22. Sep 12 17:36:44.185560 sshd[6127]: Accepted publickey for core from 10.0.0.1 port 47642 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:44.188728 sshd[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:44.203811 systemd-logind[1445]: New session 23 of user core. Sep 12 17:36:44.210959 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:36:45.036078 sshd[6127]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:45.047566 systemd[1]: sshd@22-10.0.0.87:22-10.0.0.1:47642.service: Deactivated successfully. Sep 12 17:36:45.049911 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:36:45.052109 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:36:45.057026 systemd[1]: Started sshd@23-10.0.0.87:22-10.0.0.1:47646.service - OpenSSH per-connection server daemon (10.0.0.1:47646). Sep 12 17:36:45.060322 systemd-logind[1445]: Removed session 23. 
Sep 12 17:36:45.113733 sshd[6139]: Accepted publickey for core from 10.0.0.1 port 47646 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:45.115568 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:45.124168 systemd-logind[1445]: New session 24 of user core. Sep 12 17:36:45.128959 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:36:45.366129 sshd[6139]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:45.371725 systemd-logind[1445]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:36:45.372411 systemd[1]: sshd@23-10.0.0.87:22-10.0.0.1:47646.service: Deactivated successfully. Sep 12 17:36:45.375082 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:36:45.376166 systemd-logind[1445]: Removed session 24. Sep 12 17:36:50.386062 systemd[1]: Started sshd@24-10.0.0.87:22-10.0.0.1:55310.service - OpenSSH per-connection server daemon (10.0.0.1:55310). Sep 12 17:36:50.424460 sshd[6180]: Accepted publickey for core from 10.0.0.1 port 55310 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:50.426445 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:50.431750 systemd-logind[1445]: New session 25 of user core. Sep 12 17:36:50.441056 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:36:50.615999 sshd[6180]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:50.621010 systemd[1]: sshd@24-10.0.0.87:22-10.0.0.1:55310.service: Deactivated successfully. Sep 12 17:36:50.623558 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:36:50.624429 systemd-logind[1445]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:36:50.625753 systemd-logind[1445]: Removed session 25. Sep 12 17:36:55.632583 systemd[1]: Started sshd@25-10.0.0.87:22-10.0.0.1:55318.service - OpenSSH per-connection server daemon (10.0.0.1:55318). Sep 12 17:36:55.683691 sshd[6198]: Accepted publickey for core from 10.0.0.1 port 55318 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:36:55.685787 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:36:55.690691 systemd-logind[1445]: New session 26 of user core. Sep 12 17:36:55.696961 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:36:55.838665 sshd[6198]: pam_unix(sshd:session): session closed for user core Sep 12 17:36:55.843582 systemd[1]: sshd@25-10.0.0.87:22-10.0.0.1:55318.service: Deactivated successfully. Sep 12 17:36:55.845603 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:36:55.846502 systemd-logind[1445]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:36:55.847499 systemd-logind[1445]: Removed session 26. 
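The recurring kubelet dns.go warnings ("Nameserver limits exceeded", at 17:36:37 and 17:36:42 above) fire because glibc-style resolvers honor at most three nameserver entries: the node's resolv.conf lists more, so only 1.1.1.1, 1.0.0.1, and 8.8.8.8 are applied and the rest are dropped. A small Go sketch of the same check (the /etc/resolv.conf path is the usual default, assumed here):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc's resolver (and therefore the DNS config the kubelet hands to
// pods) only honors the first three nameserver entries; extras are
// silently dropped, which is what the dns.go warning above reports.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only %v will apply\n",
			len(servers), servers[:maxNameservers])
	}
}
```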
Sep 12 17:37:00.384824 update_engine[1451]: I20250912 17:37:00.384698 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:37:00.384824 update_engine[1451]: I20250912 17:37:00.384816 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:37:00.385974 update_engine[1451]: I20250912 17:37:00.385848 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:37:00.386654 update_engine[1451]: I20250912 17:37:00.386626 1451 omaha_request_params.cc:62] Current group set to lts Sep 12 17:37:00.386820 update_engine[1451]: I20250912 17:37:00.386793 1451 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:37:00.386820 update_engine[1451]: I20250912 17:37:00.386808 1451 update_attempter.cc:643] Scheduling an action processor start. Sep 12 17:37:00.386956 update_engine[1451]: I20250912 17:37:00.386830 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:37:00.386956 update_engine[1451]: I20250912 17:37:00.386883 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:37:00.386956 update_engine[1451]: I20250912 17:37:00.386945 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:37:00.387032 update_engine[1451]: I20250912 17:37:00.386955 1451 omaha_request_action.cc:272] Request: Sep 12 17:37:00.387032 update_engine[1451]: [Omaha request XML body elided in this capture] Sep 12 17:37:00.387032 update_engine[1451]: I20250912 17:37:00.386963 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:37:00.393462 update_engine[1451]: I20250912 17:37:00.393410 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:37:00.393877 update_engine[1451]: I20250912 17:37:00.393739 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:37:00.398182 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:37:00.401989 update_engine[1451]: E20250912 17:37:00.401932 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:37:00.402066 update_engine[1451]: I20250912 17:37:00.402036 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:37:00.849015 systemd[1]: Started sshd@26-10.0.0.87:22-10.0.0.1:36502.service - OpenSSH per-connection server daemon (10.0.0.1:36502). Sep 12 17:37:00.890278 sshd[6214]: Accepted publickey for core from 10.0.0.1 port 36502 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:37:00.892175 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:37:00.896417 systemd-logind[1445]: New session 27 of user core. Sep 12 17:37:00.904917 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 12 17:37:01.021434 sshd[6214]: pam_unix(sshd:session): session closed for user core Sep 12 17:37:01.026858 systemd[1]: sshd@26-10.0.0.87:22-10.0.0.1:36502.service: Deactivated successfully. Sep 12 17:37:01.029227 systemd[1]: session-27.scope: Deactivated successfully.
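update_engine is posting an Omaha-protocol update check here, but the update server is literally configured as "disabled" (updates turned off), so curl's DNS lookup fails with "Could not resolve host: disabled" and the attempt is retried. The request body is XML; as a rough, hypothetical illustration of the general shape of an Omaha v3 update-check envelope (attribute names are a guess at the protocol's usual form, since Flatcar's exact payload did not survive this capture):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Illustrative Omaha v3 update-check envelope. The attribute set is a
// sketch of the protocol's general shape, not the exact XML Flatcar's
// update_engine sends (that body was stripped from the log above).
type request struct {
	XMLName  xml.Name `xml:"request"`
	Protocol string   `xml:"protocol,attr"`
	App      app      `xml:"app"`
}

type app struct {
	AppID       string      `xml:"appid,attr"`
	Version     string      `xml:"version,attr"`
	Track       string      `xml:"track,attr"`
	UpdateCheck updateCheck `xml:"updatecheck"`
}

type updateCheck struct{}

func main() {
	req := request{
		Protocol: "3.0",
		App: app{
			AppID:   "{example-app-id}", // hypothetical placeholder
			Version: "0.0.0",
			Track:   "lts", // matches "Current group set to lts" above
		},
	}
	out, _ := xml.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}
```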
Sep 12 17:37:01.030155 systemd-logind[1445]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:37:01.031477 systemd-logind[1445]: Removed session 27. Sep 12 17:37:02.526934 containerd[1461]: time="2025-09-12T17:37:02.526855596Z" level=info msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.642 [WARNING][6237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d04b79-edb5-447a-a20c-db3b318d7074", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6", Pod:"calico-apiserver-5455bb578c-4b9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68726cad122", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.644 [INFO][6237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.644 [INFO][6237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" iface="eth0" netns="" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.644 [INFO][6237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.644 [INFO][6237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.772 [INFO][6245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.774 [INFO][6245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.774 [INFO][6245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.782 [WARNING][6245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.782 [INFO][6245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.788 [INFO][6245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:02.797319 containerd[1461]: 2025-09-12 17:37:02.792 [INFO][6237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.808107 containerd[1461]: time="2025-09-12T17:37:02.807960230Z" level=info msg="TearDown network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" successfully" Sep 12 17:37:02.808107 containerd[1461]: time="2025-09-12T17:37:02.808041303Z" level=info msg="StopPodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" returns successfully" Sep 12 17:37:02.808784 containerd[1461]: time="2025-09-12T17:37:02.808714895Z" level=info msg="RemovePodSandbox for \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" Sep 12 17:37:02.808872 containerd[1461]: time="2025-09-12T17:37:02.808782062Z" level=info msg="Forcibly stopping sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\"" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.921 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6d04b79-edb5-447a-a20c-db3b318d7074", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d99e274141b5bd01aee7e9a9fdb50cda58180862cb05f2f929088820e6fa8e6", Pod:"calico-apiserver-5455bb578c-4b9s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali68726cad122", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.922 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.922 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" iface="eth0" netns="" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.922 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.922 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.945 [INFO][6272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.946 [INFO][6272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.946 [INFO][6272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.954 [WARNING][6272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.954 [INFO][6272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" HandleID="k8s-pod-network.c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Workload="localhost-k8s-calico--apiserver--5455bb578c--4b9s4-eth0" Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.959 [INFO][6272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:02.967637 containerd[1461]: 2025-09-12 17:37:02.963 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c" Sep 12 17:37:02.968245 containerd[1461]: time="2025-09-12T17:37:02.967704323Z" level=info msg="TearDown network for sandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" successfully" Sep 12 17:37:02.978466 containerd[1461]: time="2025-09-12T17:37:02.978319428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:02.978466 containerd[1461]: time="2025-09-12T17:37:02.978458080Z" level=info msg="RemovePodSandbox \"c5844f139718f6c5e108d64f390dde57bc2b6682cc32f95287480b9df11dd88c\" returns successfully" Sep 12 17:37:02.979354 containerd[1461]: time="2025-09-12T17:37:02.979265595Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.033 [WARNING][6291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rwvqt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"5d9c942d-8aa0-4a74-bc26-89c34879a081", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec", Pod:"goldmane-7988f88666-rwvqt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f89debc1a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.034 [INFO][6291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.034 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" iface="eth0" netns="" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.034 [INFO][6291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.034 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.059 [INFO][6300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.059 [INFO][6300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.059 [INFO][6300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.065 [WARNING][6300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.066 [INFO][6300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.070 [INFO][6300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:03.077358 containerd[1461]: 2025-09-12 17:37:03.073 [INFO][6291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.077358 containerd[1461]: time="2025-09-12T17:37:03.077310962Z" level=info msg="TearDown network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" successfully" Sep 12 17:37:03.077358 containerd[1461]: time="2025-09-12T17:37:03.077351208Z" level=info msg="StopPodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" returns successfully" Sep 12 17:37:03.078419 containerd[1461]: time="2025-09-12T17:37:03.077966109Z" level=info msg="RemovePodSandbox for \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:37:03.078419 containerd[1461]: time="2025-09-12T17:37:03.078002107Z" level=info msg="Forcibly stopping sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\"" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.181 [WARNING][6318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--rwvqt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"5d9c942d-8aa0-4a74-bc26-89c34879a081", ResourceVersion:"1315", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09009379a01ce3f1d5683b5602bd11a6c421c3f4ecf7fa808f6159cd352249ec", Pod:"goldmane-7988f88666-rwvqt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f89debc1a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.182 [INFO][6318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.182 [INFO][6318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" iface="eth0" netns="" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.182 [INFO][6318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.182 [INFO][6318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.217 [INFO][6326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.218 [INFO][6326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.218 [INFO][6326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.224 [WARNING][6326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.224 [INFO][6326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" HandleID="k8s-pod-network.83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Workload="localhost-k8s-goldmane--7988f88666--rwvqt-eth0" Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.227 [INFO][6326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:03.239300 containerd[1461]: 2025-09-12 17:37:03.232 [INFO][6318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9" Sep 12 17:37:03.241973 containerd[1461]: time="2025-09-12T17:37:03.240254331Z" level=info msg="TearDown network for sandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" successfully" Sep 12 17:37:03.246590 containerd[1461]: time="2025-09-12T17:37:03.246505363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:03.246705 containerd[1461]: time="2025-09-12T17:37:03.246609149Z" level=info msg="RemovePodSandbox \"83f7d25a4e46f5c489465034da823aa42cd54ea547b63865690634c9f82f84d9\" returns successfully" Sep 12 17:37:03.247686 containerd[1461]: time="2025-09-12T17:37:03.247284794Z" level=info msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.302 [WARNING][6343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kc8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33030e82-9043-4dea-9a42-6edffd5b404a", ResourceVersion:"1333", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a", Pod:"csi-node-driver-5kc8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3612bb6eb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.302 [INFO][6343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.302 [INFO][6343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" iface="eth0" netns="" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.302 [INFO][6343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.302 [INFO][6343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.330 [INFO][6352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.330 [INFO][6352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.330 [INFO][6352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.340 [WARNING][6352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.340 [INFO][6352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.342 [INFO][6352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:03.352796 containerd[1461]: 2025-09-12 17:37:03.346 [INFO][6343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.352796 containerd[1461]: time="2025-09-12T17:37:03.352307911Z" level=info msg="TearDown network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" successfully" Sep 12 17:37:03.352796 containerd[1461]: time="2025-09-12T17:37:03.352378034Z" level=info msg="StopPodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" returns successfully" Sep 12 17:37:03.353368 containerd[1461]: time="2025-09-12T17:37:03.353254970Z" level=info msg="RemovePodSandbox for \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" Sep 12 17:37:03.353368 containerd[1461]: time="2025-09-12T17:37:03.353289885Z" level=info msg="Forcibly stopping sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\"" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.399 [WARNING][6369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5kc8t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"33030e82-9043-4dea-9a42-6edffd5b404a", ResourceVersion:"1333", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fbe011eaf259ec6b85071c647574e8e198473cfe87bfe262724348ec1fe393a", Pod:"csi-node-driver-5kc8t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3612bb6eb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.399 [INFO][6369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.399 [INFO][6369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" iface="eth0" netns="" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.399 [INFO][6369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.399 [INFO][6369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.435 [INFO][6378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.436 [INFO][6378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.436 [INFO][6378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.442 [WARNING][6378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.443 [INFO][6378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" HandleID="k8s-pod-network.2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Workload="localhost-k8s-csi--node--driver--5kc8t-eth0" Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.446 [INFO][6378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:37:03.454063 containerd[1461]: 2025-09-12 17:37:03.449 [INFO][6369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c" Sep 12 17:37:03.454663 containerd[1461]: time="2025-09-12T17:37:03.454125326Z" level=info msg="TearDown network for sandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" successfully" Sep 12 17:37:03.518932 containerd[1461]: time="2025-09-12T17:37:03.518826520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:37:03.519290 containerd[1461]: time="2025-09-12T17:37:03.519031738Z" level=info msg="RemovePodSandbox \"2931aa156df9e887f810ce3dfd9bedbec9712ede0aa887bbfab1f2cb3339269c\" returns successfully" Sep 12 17:37:03.520257 containerd[1461]: time="2025-09-12T17:37:03.519800089Z" level=info msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\"" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.572 [WARNING][6395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36cb1f24-39ff-404a-a6eb-ecb4d0146f82", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6", Pod:"coredns-7c65d6cfc9-km7sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb80a0cac0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.572 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.572 [INFO][6395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" iface="eth0" netns="" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.572 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.572 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.599 [INFO][6403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0" Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.599 [INFO][6403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.599 [INFO][6403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.606 [WARNING][6403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0"
Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.606 [INFO][6403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0"
Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.607 [INFO][6403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:03.615580 containerd[1461]: 2025-09-12 17:37:03.611 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"
Sep 12 17:37:03.615580 containerd[1461]: time="2025-09-12T17:37:03.615533055Z" level=info msg="TearDown network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" successfully"
Sep 12 17:37:03.615580 containerd[1461]: time="2025-09-12T17:37:03.615572178Z" level=info msg="StopPodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" returns successfully"
Sep 12 17:37:03.616571 containerd[1461]: time="2025-09-12T17:37:03.616372490Z" level=info msg="RemovePodSandbox for \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\""
Sep 12 17:37:03.616571 containerd[1461]: time="2025-09-12T17:37:03.616401204Z" level=info msg="Forcibly stopping sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\""
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.691 [WARNING][6420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36cb1f24-39ff-404a-a6eb-ecb4d0146f82", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44ed13130541bf093e09785ee95bc099683d72cc0468e30596709ae97c3c37c6", Pod:"coredns-7c65d6cfc9-km7sb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb80a0cac0b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.691 [INFO][6420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.691 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" iface="eth0" netns=""
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.691 [INFO][6420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.691 [INFO][6420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.749 [INFO][6429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.750 [INFO][6429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.750 [INFO][6429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.759 [WARNING][6429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.759 [INFO][6429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" HandleID="k8s-pod-network.665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49" Workload="localhost-k8s-coredns--7c65d6cfc9--km7sb-eth0"
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.764 [INFO][6429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:03.772852 containerd[1461]: 2025-09-12 17:37:03.768 [INFO][6420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49"
Sep 12 17:37:03.773902 containerd[1461]: time="2025-09-12T17:37:03.773708881Z" level=info msg="TearDown network for sandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" successfully"
Sep 12 17:37:03.782152 containerd[1461]: time="2025-09-12T17:37:03.782102750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:37:03.782152 containerd[1461]: time="2025-09-12T17:37:03.782151812Z" level=info msg="RemovePodSandbox \"665bd8305abc7abb4ba9c6b5fd86809810bf1bac2b7f150e1916d5acf46c9d49\" returns successfully"
Sep 12 17:37:03.782484 containerd[1461]: time="2025-09-12T17:37:03.782450676Z" level=info msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\""
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.826 [WARNING][6447] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14f9bae-6755-476b-b594-70b724fc0885", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3", Pod:"calico-apiserver-5455bb578c-fhlbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali478ce5a866d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.827 [INFO][6447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.827 [INFO][6447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" iface="eth0" netns=""
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.827 [INFO][6447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.827 [INFO][6447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.858 [INFO][6456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.858 [INFO][6456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.858 [INFO][6456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.866 [WARNING][6456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.866 [INFO][6456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.868 [INFO][6456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:03.876424 containerd[1461]: 2025-09-12 17:37:03.872 [INFO][6447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.876424 containerd[1461]: time="2025-09-12T17:37:03.876100810Z" level=info msg="TearDown network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" successfully"
Sep 12 17:37:03.876424 containerd[1461]: time="2025-09-12T17:37:03.876158880Z" level=info msg="StopPodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" returns successfully"
Sep 12 17:37:03.877647 containerd[1461]: time="2025-09-12T17:37:03.877251583Z" level=info msg="RemovePodSandbox for \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\""
Sep 12 17:37:03.877647 containerd[1461]: time="2025-09-12T17:37:03.877308480Z" level=info msg="Forcibly stopping sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\""
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.924 [WARNING][6474] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0", GenerateName:"calico-apiserver-5455bb578c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e14f9bae-6755-476b-b594-70b724fc0885", ResourceVersion:"1289", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5455bb578c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe4fdc34958d452ce764c46519b781b24b928ba1852ffc28ecc90193ae1aca3", Pod:"calico-apiserver-5455bb578c-fhlbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali478ce5a866d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.925 [INFO][6474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.925 [INFO][6474] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" iface="eth0" netns=""
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.925 [INFO][6474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.925 [INFO][6474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.950 [INFO][6483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.950 [INFO][6483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.950 [INFO][6483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.958 [WARNING][6483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.958 [INFO][6483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" HandleID="k8s-pod-network.f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41" Workload="localhost-k8s-calico--apiserver--5455bb578c--fhlbd-eth0"
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.960 [INFO][6483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:03.968735 containerd[1461]: 2025-09-12 17:37:03.963 [INFO][6474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41"
Sep 12 17:37:03.968735 containerd[1461]: time="2025-09-12T17:37:03.968698857Z" level=info msg="TearDown network for sandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" successfully"
Sep 12 17:37:03.973786 containerd[1461]: time="2025-09-12T17:37:03.973716290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:37:03.974081 containerd[1461]: time="2025-09-12T17:37:03.974041865Z" level=info msg="RemovePodSandbox \"f1bde591529b384a9385bee8087a5dcda27b635d87e8d7d53c1066f95cbb4f41\" returns successfully"
Sep 12 17:37:03.974735 containerd[1461]: time="2025-09-12T17:37:03.974682545Z" level=info msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\""
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.017 [WARNING][6500] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0", GenerateName:"calico-kube-controllers-b9b56c74c-", Namespace:"calico-system", SelfLink:"", UID:"2ee3e697-bf88-4b68-b880-613542cf53e7", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9b56c74c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f", Pod:"calico-kube-controllers-b9b56c74c-m8j8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86693cd83ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.017 [INFO][6500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.017 [INFO][6500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" iface="eth0" netns=""
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.017 [INFO][6500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.017 [INFO][6500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.048 [INFO][6509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.048 [INFO][6509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.048 [INFO][6509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.058 [WARNING][6509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.058 [INFO][6509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.060 [INFO][6509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.069109 containerd[1461]: 2025-09-12 17:37:04.065 [INFO][6500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.084749 containerd[1461]: time="2025-09-12T17:37:04.069161218Z" level=info msg="TearDown network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" successfully"
Sep 12 17:37:04.084749 containerd[1461]: time="2025-09-12T17:37:04.069191024Z" level=info msg="StopPodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" returns successfully"
Sep 12 17:37:04.084749 containerd[1461]: time="2025-09-12T17:37:04.069829159Z" level=info msg="RemovePodSandbox for \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\""
Sep 12 17:37:04.084749 containerd[1461]: time="2025-09-12T17:37:04.069869255Z" level=info msg="Forcibly stopping sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\""
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.112 [WARNING][6528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0", GenerateName:"calico-kube-controllers-b9b56c74c-", Namespace:"calico-system", SelfLink:"", UID:"2ee3e697-bf88-4b68-b880-613542cf53e7", ResourceVersion:"1195", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b9b56c74c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"876a47fd9284269593decf26b82392dadf720052dc29e28e5ad09b773f70520f", Pod:"calico-kube-controllers-b9b56c74c-m8j8k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali86693cd83ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.112 [INFO][6528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.112 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" iface="eth0" netns=""
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.112 [INFO][6528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.112 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.139 [INFO][6537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.139 [INFO][6537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.139 [INFO][6537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.145 [WARNING][6537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.145 [INFO][6537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" HandleID="k8s-pod-network.b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60" Workload="localhost-k8s-calico--kube--controllers--b9b56c74c--m8j8k-eth0"
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.146 [INFO][6537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.152615 containerd[1461]: 2025-09-12 17:37:04.149 [INFO][6528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60"
Sep 12 17:37:04.152615 containerd[1461]: time="2025-09-12T17:37:04.152587580Z" level=info msg="TearDown network for sandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" successfully"
Sep 12 17:37:04.157448 containerd[1461]: time="2025-09-12T17:37:04.157379126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:37:04.157531 containerd[1461]: time="2025-09-12T17:37:04.157500555Z" level=info msg="RemovePodSandbox \"b6d4b4328e9243db0324e17c82201115348d52963b68fda834d4926f56250f60\" returns successfully"
Sep 12 17:37:04.158131 containerd[1461]: time="2025-09-12T17:37:04.158096860Z" level=info msg="StopPodSandbox for \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\""
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.208 [WARNING][6554] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.209 [INFO][6554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.209 [INFO][6554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" iface="eth0" netns=""
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.209 [INFO][6554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.209 [INFO][6554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.241 [INFO][6563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.242 [INFO][6563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.242 [INFO][6563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.250 [WARNING][6563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.251 [INFO][6563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.252 [INFO][6563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.260261 containerd[1461]: 2025-09-12 17:37:04.256 [INFO][6554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.260261 containerd[1461]: time="2025-09-12T17:37:04.260262310Z" level=info msg="TearDown network for sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" successfully"
Sep 12 17:37:04.260844 containerd[1461]: time="2025-09-12T17:37:04.260293028Z" level=info msg="StopPodSandbox for \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" returns successfully"
Sep 12 17:37:04.261743 containerd[1461]: time="2025-09-12T17:37:04.261015713Z" level=info msg="RemovePodSandbox for \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\""
Sep 12 17:37:04.261743 containerd[1461]: time="2025-09-12T17:37:04.261062681Z" level=info msg="Forcibly stopping sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\""
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.317 [WARNING][6579] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.318 [INFO][6579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.318 [INFO][6579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" iface="eth0" netns=""
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.318 [INFO][6579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.318 [INFO][6579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.375 [INFO][6587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.375 [INFO][6587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.375 [INFO][6587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.383 [WARNING][6587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.383 [INFO][6587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" HandleID="k8s-pod-network.8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.386 [INFO][6587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.393665 containerd[1461]: 2025-09-12 17:37:04.389 [INFO][6579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f"
Sep 12 17:37:04.394479 containerd[1461]: time="2025-09-12T17:37:04.393714506Z" level=info msg="TearDown network for sandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" successfully"
Sep 12 17:37:04.406147 containerd[1461]: time="2025-09-12T17:37:04.405926391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:37:04.406147 containerd[1461]: time="2025-09-12T17:37:04.406036790Z" level=info msg="RemovePodSandbox \"8b57ab009cf5bfe240e8dcdc479fcd24e72bc0130c85093eef3fb1b7eb19263f\" returns successfully"
Sep 12 17:37:04.406818 containerd[1461]: time="2025-09-12T17:37:04.406773620Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\""
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.463 [WARNING][6604] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.464 [INFO][6604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.464 [INFO][6604] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns=""
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.464 [INFO][6604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.464 [INFO][6604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.493 [INFO][6613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.493 [INFO][6613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.494 [INFO][6613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.500 [WARNING][6613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.501 [INFO][6613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.504 [INFO][6613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.511676 containerd[1461]: 2025-09-12 17:37:04.507 [INFO][6604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.512589 containerd[1461]: time="2025-09-12T17:37:04.511814869Z" level=info msg="TearDown network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" successfully"
Sep 12 17:37:04.512589 containerd[1461]: time="2025-09-12T17:37:04.511848152Z" level=info msg="StopPodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" returns successfully"
Sep 12 17:37:04.512589 containerd[1461]: time="2025-09-12T17:37:04.512561900Z" level=info msg="RemovePodSandbox for \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\""
Sep 12 17:37:04.512710 containerd[1461]: time="2025-09-12T17:37:04.512599069Z" level=info msg="Forcibly stopping sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\""
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.584 [WARNING][6631] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" WorkloadEndpoint="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.584 [INFO][6631] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.584 [INFO][6631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" iface="eth0" netns=""
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.584 [INFO][6631] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.584 [INFO][6631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.618 [INFO][6640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.618 [INFO][6640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.619 [INFO][6640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.626 [WARNING][6640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.626 [INFO][6640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" HandleID="k8s-pod-network.93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf" Workload="localhost-k8s-whisker--6487485898--njsl8-eth0"
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.628 [INFO][6640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:37:04.636439 containerd[1461]: 2025-09-12 17:37:04.632 [INFO][6631] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf"
Sep 12 17:37:04.637418 containerd[1461]: time="2025-09-12T17:37:04.636504242Z" level=info msg="TearDown network for sandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" successfully"
Sep 12 17:37:04.641698 containerd[1461]: time="2025-09-12T17:37:04.641607967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:37:04.641787 containerd[1461]: time="2025-09-12T17:37:04.641715891Z" level=info msg="RemovePodSandbox \"93a5af348f806fe8001a664699ebcda75f725eb2f2f0f6415b4ca4c51efd16bf\" returns successfully"
Sep 12 17:37:05.386043 systemd[1]: run-containerd-runc-k8s.io-624b9caec8c10fcf81e6fbbcdfa7bec5d1d2acc6f3323c99699c2f6047d4d55e-runc.e6asCh.mount: Deactivated successfully.
Sep 12 17:37:06.036261 systemd[1]: Started sshd@27-10.0.0.87:22-10.0.0.1:36510.service - OpenSSH per-connection server daemon (10.0.0.1:36510).
Sep 12 17:37:06.078196 sshd[6672]: Accepted publickey for core from 10.0.0.1 port 36510 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:37:06.080324 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:06.085092 systemd-logind[1445]: New session 28 of user core.
Sep 12 17:37:06.095033 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:37:06.320337 sshd[6672]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:06.325896 systemd-logind[1445]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:37:06.326610 systemd[1]: sshd@27-10.0.0.87:22-10.0.0.1:36510.service: Deactivated successfully.
Sep 12 17:37:06.330363 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:37:06.331894 systemd-logind[1445]: Removed session 28.
Sep 12 17:37:10.355166 update_engine[1451]: I20250912 17:37:10.355050 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:37:10.355627 update_engine[1451]: I20250912 17:37:10.355418 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:37:10.355779 update_engine[1451]: I20250912 17:37:10.355670 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:37:10.365907 update_engine[1451]: E20250912 17:37:10.365846 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:37:10.366050 update_engine[1451]: I20250912 17:37:10.365933 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 12 17:37:11.336403 systemd[1]: Started sshd@28-10.0.0.87:22-10.0.0.1:58994.service - OpenSSH per-connection server daemon (10.0.0.1:58994).
Sep 12 17:37:11.390519 sshd[6731]: Accepted publickey for core from 10.0.0.1 port 58994 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc
Sep 12 17:37:11.392782 sshd[6731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:37:11.398486 systemd-logind[1445]: New session 29 of user core.
Sep 12 17:37:11.409088 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 17:37:11.563959 sshd[6731]: pam_unix(sshd:session): session closed for user core
Sep 12 17:37:11.569074 systemd[1]: sshd@28-10.0.0.87:22-10.0.0.1:58994.service: Deactivated successfully.
Sep 12 17:37:11.572802 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 17:37:11.576255 systemd-logind[1445]: Session 29 logged out. Waiting for processes to exit.
Sep 12 17:37:11.577697 systemd-logind[1445]: Removed session 29.