Sep 12 17:31:37.897626 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:31:37.897658 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:31:37.897669 kernel: BIOS-provided physical RAM map:
Sep 12 17:31:37.897676 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:31:37.897682 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:31:37.897688 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:31:37.897696 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 17:31:37.897703 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 17:31:37.897709 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:31:37.897718 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 17:31:37.897724 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:31:37.897730 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:31:37.897740 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:31:37.897746 kernel: NX (Execute Disable) protection: active
Sep 12 17:31:37.897754 kernel: APIC: Static calls initialized
Sep 12 17:31:37.897767 kernel: SMBIOS 2.8 present.
Sep 12 17:31:37.897774 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 17:31:37.897781 kernel: Hypervisor detected: KVM
Sep 12 17:31:37.897787 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:31:37.897794 kernel: kvm-clock: using sched offset of 2644048992 cycles
Sep 12 17:31:37.897802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:31:37.897809 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 17:31:37.897816 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:31:37.897823 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:31:37.897833 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 17:31:37.897840 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:31:37.897911 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:31:37.897918 kernel: Using GB pages for direct mapping
Sep 12 17:31:37.897926 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:31:37.897933 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 17:31:37.897940 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.897947 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.897954 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.897964 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 17:31:37.897971 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.897991 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.897999 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.898006 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:37.898013 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 17:31:37.898020 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 17:31:37.898032 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 17:31:37.898041 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 17:31:37.898049 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 17:31:37.898064 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 17:31:37.898081 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 17:31:37.898101 kernel: No NUMA configuration found
Sep 12 17:31:37.898118 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 17:31:37.898139 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 12 17:31:37.898169 kernel: Zone ranges:
Sep 12 17:31:37.898186 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:31:37.898195 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 17:31:37.898218 kernel: Normal empty
Sep 12 17:31:37.898227 kernel: Movable zone start for each node
Sep 12 17:31:37.898234 kernel: Early memory node ranges
Sep 12 17:31:37.898242 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:31:37.898249 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 17:31:37.898257 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 17:31:37.898268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:31:37.898278 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:31:37.898285 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 17:31:37.898293 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:31:37.898300 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:31:37.898307 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:31:37.898315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:31:37.898322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:31:37.898329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:31:37.898339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:31:37.898346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:31:37.898354 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:31:37.898361 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:31:37.898368 kernel: TSC deadline timer available
Sep 12 17:31:37.898376 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:31:37.898383 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:31:37.898390 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:31:37.898400 kernel: kvm-guest: setup PV sched yield
Sep 12 17:31:37.898411 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 17:31:37.898418 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:31:37.898426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:31:37.898433 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:31:37.898441 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:31:37.898448 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:31:37.898455 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:31:37.898462 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:31:37.898469 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:31:37.898481 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:31:37.898488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:31:37.898496 kernel: random: crng init done
Sep 12 17:31:37.898503 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:31:37.898511 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:31:37.898518 kernel: Fallback order for Node 0: 0
Sep 12 17:31:37.898525 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 12 17:31:37.898533 kernel: Policy zone: DMA32
Sep 12 17:31:37.898543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:31:37.898551 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 136900K reserved, 0K cma-reserved)
Sep 12 17:31:37.898558 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:31:37.898565 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:31:37.898573 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:31:37.898580 kernel: Dynamic Preempt: voluntary
Sep 12 17:31:37.898587 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:31:37.898601 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:31:37.898609 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:31:37.898619 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:31:37.898627 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:31:37.898634 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:31:37.898648 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:31:37.898658 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:31:37.898665 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:31:37.898673 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:31:37.898680 kernel: Console: colour VGA+ 80x25
Sep 12 17:31:37.898687 kernel: printk: console [ttyS0] enabled
Sep 12 17:31:37.898695 kernel: ACPI: Core revision 20230628
Sep 12 17:31:37.898705 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:31:37.898713 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:31:37.898720 kernel: x2apic enabled
Sep 12 17:31:37.898728 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:31:37.898735 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:31:37.898743 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:31:37.898750 kernel: kvm-guest: setup PV IPIs
Sep 12 17:31:37.898767 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:31:37.898775 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:31:37.898783 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 17:31:37.898790 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:31:37.898800 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:31:37.898808 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:31:37.898816 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:31:37.898823 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:31:37.898831 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:31:37.898842 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:31:37.898861 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:31:37.898872 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:31:37.898880 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:31:37.898888 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:31:37.898896 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:31:37.898904 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:31:37.898912 kernel: active return thunk: srso_return_thunk
Sep 12 17:31:37.898919 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:31:37.898930 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:31:37.898938 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:31:37.898946 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:31:37.898953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:31:37.898961 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:31:37.898969 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:31:37.898977 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:31:37.898984 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:31:37.898994 kernel: landlock: Up and running.
Sep 12 17:31:37.899002 kernel: SELinux: Initializing.
Sep 12 17:31:37.899010 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:31:37.899017 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:31:37.899025 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:31:37.899033 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:31:37.899041 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:31:37.899049 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:31:37.899059 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:31:37.899069 kernel: ... version: 0
Sep 12 17:31:37.899077 kernel: ... bit width: 48
Sep 12 17:31:37.899084 kernel: ... generic registers: 6
Sep 12 17:31:37.899092 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:31:37.899100 kernel: ... max period: 00007fffffffffff
Sep 12 17:31:37.899107 kernel: ... fixed-purpose events: 0
Sep 12 17:31:37.899115 kernel: ... event mask: 000000000000003f
Sep 12 17:31:37.899123 kernel: signal: max sigframe size: 1776
Sep 12 17:31:37.899130 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:31:37.899140 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:31:37.899148 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:31:37.899155 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:31:37.899163 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:31:37.899171 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:31:37.899178 kernel: smpboot: Max logical packages: 1
Sep 12 17:31:37.899186 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 17:31:37.899194 kernel: devtmpfs: initialized
Sep 12 17:31:37.899201 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:31:37.899209 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:31:37.899219 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:31:37.899227 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:31:37.899235 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:31:37.899242 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:31:37.899251 kernel: audit: type=2000 audit(1757698296.794:1): state=initialized audit_enabled=0 res=1
Sep 12 17:31:37.899258 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:31:37.899266 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:31:37.899274 kernel: cpuidle: using governor menu
Sep 12 17:31:37.899281 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:31:37.899292 kernel: dca service started, version 1.12.1
Sep 12 17:31:37.899299 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:31:37.899307 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 17:31:37.899315 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:31:37.899323 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:31:37.899330 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:31:37.899338 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:31:37.899354 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:31:37.899380 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:31:37.899395 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:31:37.899413 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:31:37.899432 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:31:37.899448 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:31:37.899467 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:31:37.899485 kernel: ACPI: Interpreter enabled
Sep 12 17:31:37.899504 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:31:37.899519 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:31:37.899527 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:31:37.899537 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:31:37.899545 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:31:37.899552 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:31:37.899879 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:31:37.900034 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:31:37.900164 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:31:37.900174 kernel: PCI host bridge to bus 0000:00
Sep 12 17:31:37.900327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:31:37.900445 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:31:37.900562 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:31:37.900687 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 17:31:37.900803 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:31:37.900947 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 17:31:37.901120 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:31:37.901316 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:31:37.901474 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:31:37.901605 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 12 17:31:37.901744 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 12 17:31:37.901904 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 12 17:31:37.902118 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:31:37.902280 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:31:37.902410 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 17:31:37.902536 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 12 17:31:37.902671 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 17:31:37.902817 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:31:37.902966 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:31:37.903095 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 12 17:31:37.903229 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 17:31:37.903378 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:31:37.903507 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 12 17:31:37.903633 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 12 17:31:37.903770 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 17:31:37.903913 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 12 17:31:37.904057 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:31:37.904191 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:31:37.904333 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:31:37.904460 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 12 17:31:37.904587 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 12 17:31:37.904737 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:31:37.904900 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 17:31:37.904918 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:31:37.904926 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:31:37.904934 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:31:37.904941 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:31:37.904949 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:31:37.904957 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:31:37.904965 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:31:37.904972 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:31:37.904980 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:31:37.904990 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:31:37.904998 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:31:37.905005 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:31:37.905013 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:31:37.905021 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:31:37.905029 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:31:37.905036 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:31:37.905044 kernel: iommu: Default domain type: Translated
Sep 12 17:31:37.905052 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:31:37.905062 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:31:37.905070 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:31:37.905077 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:31:37.905085 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 17:31:37.905213 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:31:37.905341 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:31:37.905467 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:31:37.905477 kernel: vgaarb: loaded
Sep 12 17:31:37.905489 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:31:37.905497 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:31:37.905505 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:31:37.905513 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:31:37.905521 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:31:37.905529 kernel: pnp: PnP ACPI init
Sep 12 17:31:37.905698 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:31:37.905710 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:31:37.905718 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:31:37.905730 kernel: NET: Registered PF_INET protocol family
Sep 12 17:31:37.905738 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:31:37.905746 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:31:37.905754 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:31:37.905762 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:31:37.905770 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:31:37.905777 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:31:37.905785 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:31:37.905795 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:31:37.905803 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:31:37.905810 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:31:37.905981 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:31:37.906098 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:31:37.906213 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:31:37.906328 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 17:31:37.906443 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:31:37.906556 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 17:31:37.906571 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:31:37.906580 kernel: Initialise system trusted keyrings
Sep 12 17:31:37.906588 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:31:37.906595 kernel: Key type asymmetric registered
Sep 12 17:31:37.906603 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:31:37.906611 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:31:37.906619 kernel: io scheduler mq-deadline registered
Sep 12 17:31:37.906626 kernel: io scheduler kyber registered
Sep 12 17:31:37.906634 kernel: io scheduler bfq registered
Sep 12 17:31:37.906653 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:31:37.906661 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:31:37.906669 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:31:37.906677 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:31:37.906684 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:31:37.906693 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:31:37.906701 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:31:37.906709 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:31:37.906717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:31:37.906881 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:31:37.907007 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:31:37.907018 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Sep 12 17:31:37.907135 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:31:37 UTC (1757698297)
Sep 12 17:31:37.907255 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:31:37.907265 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:31:37.907273 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:31:37.907281 kernel: Segment Routing with IPv6
Sep 12 17:31:37.907293 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:31:37.907301 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:31:37.907308 kernel: Key type dns_resolver registered
Sep 12 17:31:37.907316 kernel: IPI shorthand broadcast: enabled
Sep 12 17:31:37.907324 kernel: sched_clock: Marking stable (825003166, 102313508)->(992454472, -65137798)
Sep 12 17:31:37.907331 kernel: registered taskstats version 1
Sep 12 17:31:37.907339 kernel: Loading compiled-in X.509 certificates
Sep 12 17:31:37.907347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:31:37.907354 kernel: Key type .fscrypt registered
Sep 12 17:31:37.907364 kernel: Key type fscrypt-provisioning registered
Sep 12 17:31:37.907372 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:31:37.907380 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:31:37.907387 kernel: ima: No architecture policies found
Sep 12 17:31:37.907395 kernel: clk: Disabling unused clocks
Sep 12 17:31:37.907402 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:31:37.907410 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:31:37.907418 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:31:37.907428 kernel: Run /init as init process
Sep 12 17:31:37.907436 kernel: with arguments:
Sep 12 17:31:37.907443 kernel: /init
Sep 12 17:31:37.907451 kernel: with environment:
Sep 12 17:31:37.907458 kernel: HOME=/
Sep 12 17:31:37.907466 kernel: TERM=linux
Sep 12 17:31:37.907473 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:31:37.907483 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:31:37.907492 systemd[1]: Detected virtualization kvm.
Sep 12 17:31:37.907503 systemd[1]: Detected architecture x86-64.
Sep 12 17:31:37.907511 systemd[1]: Running in initrd.
Sep 12 17:31:37.907519 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:31:37.907527 systemd[1]: Hostname set to <localhost>.
Sep 12 17:31:37.907536 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:31:37.907544 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:31:37.907552 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:31:37.907561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:31:37.907572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:31:37.907592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:31:37.907603 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:31:37.907612 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:31:37.907622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:31:37.907633 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:31:37.907651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:31:37.907659 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:31:37.907668 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:31:37.907676 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:31:37.907684 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:31:37.907693 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:31:37.907702 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:31:37.907713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:31:37.907721 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:31:37.907730 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:31:37.907738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:31:37.907747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:31:37.907756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:31:37.907764 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:31:37.907772 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:31:37.907784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:31:37.907792 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:31:37.907800 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:31:37.907809 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:31:37.907817 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:31:37.907825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:37.907834 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:31:37.907843 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:31:37.907955 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:31:37.907990 systemd-journald[192]: Collecting audit messages is disabled.
Sep 12 17:31:37.908013 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:31:37.908024 systemd-journald[192]: Journal started
Sep 12 17:31:37.908045 systemd-journald[192]: Runtime Journal (/run/log/journal/96161d43ce254bdbb71e8cb9c3115850) is 6.0M, max 48.4M, 42.3M free.
Sep 12 17:31:37.899906 systemd-modules-load[193]: Inserted module 'overlay'
Sep 12 17:31:37.935004 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:31:37.935021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:31:37.935032 kernel: Bridge firewalling registered
Sep 12 17:31:37.926790 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 12 17:31:37.933573 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:31:37.937410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:37.956525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:31:37.957336 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:31:37.961784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:31:37.970198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:31:37.973035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:31:37.974619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:31:37.978369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:31:37.980020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:31:37.984062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:31:37.990483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:31:37.993965 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:31:38.014355 systemd-resolved[222]: Positive Trust Anchors:
Sep 12 17:31:38.014375 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:31:38.014407 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:31:38.017075 systemd-resolved[222]: Defaulting to hostname 'linux'.
Sep 12 17:31:38.025386 dracut-cmdline[229]: dracut-dracut-053
Sep 12 17:31:38.025386 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:31:38.018324 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:31:38.024827 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:31:38.123900 kernel: SCSI subsystem initialized
Sep 12 17:31:38.132876 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:31:38.142883 kernel: iscsi: registered transport (tcp)
Sep 12 17:31:38.167886 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:31:38.167927 kernel: QLogic iSCSI HBA Driver
Sep 12 17:31:38.221154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:31:38.230000 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:31:38.254242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:31:38.254302 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:31:38.255224 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:31:38.296889 kernel: raid6: avx2x4 gen() 29912 MB/s
Sep 12 17:31:38.313873 kernel: raid6: avx2x2 gen() 30694 MB/s
Sep 12 17:31:38.330924 kernel: raid6: avx2x1 gen() 25936 MB/s
Sep 12 17:31:38.330949 kernel: raid6: using algorithm avx2x2 gen() 30694 MB/s
Sep 12 17:31:38.348928 kernel: raid6: .... xor() 19784 MB/s, rmw enabled
Sep 12 17:31:38.348974 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:31:38.368878 kernel: xor: automatically using best checksumming function avx
Sep 12 17:31:38.524893 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:31:38.539484 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:31:38.550078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:31:38.562310 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Sep 12 17:31:38.567080 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:31:38.579061 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:31:38.594312 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 12 17:31:38.628335 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:31:38.645984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:31:38.712188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:31:38.722043 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:31:38.733717 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:31:38.736027 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:31:38.737635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:31:38.741134 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:31:38.744869 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:31:38.748212 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:31:38.756857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:31:38.756882 kernel: GPT:9289727 != 19775487
Sep 12 17:31:38.756894 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:31:38.756911 kernel: GPT:9289727 != 19775487
Sep 12 17:31:38.756920 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:31:38.756930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:38.758053 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:31:38.768866 kernel: libata version 3.00 loaded.
Sep 12 17:31:38.773897 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:31:38.774225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:31:38.782080 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:31:38.782286 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:31:38.782299 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:31:38.782444 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:31:38.787268 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:31:38.787325 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:31:38.787864 kernel: scsi host0: ahci
Sep 12 17:31:38.788883 kernel: scsi host1: ahci
Sep 12 17:31:38.792913 kernel: scsi host2: ahci
Sep 12 17:31:38.795963 kernel: scsi host3: ahci
Sep 12 17:31:38.796209 kernel: scsi host4: ahci
Sep 12 17:31:38.798408 kernel: scsi host5: ahci
Sep 12 17:31:38.798649 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Sep 12 17:31:38.798668 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Sep 12 17:31:38.798235 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:31:38.809798 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Sep 12 17:31:38.809821 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Sep 12 17:31:38.809837 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Sep 12 17:31:38.809880 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Sep 12 17:31:38.798390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:31:38.812874 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Sep 12 17:31:38.814869 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (467)
Sep 12 17:31:38.815520 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:31:38.816791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:31:38.816954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:38.818733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:38.830068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:38.850658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:31:38.881181 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:38.888476 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:31:38.894717 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:31:38.897912 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:31:38.905919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:31:38.923974 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:31:38.926992 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:31:38.942505 disk-uuid[555]: Primary Header is updated.
Sep 12 17:31:38.942505 disk-uuid[555]: Secondary Entries is updated.
Sep 12 17:31:38.942505 disk-uuid[555]: Secondary Header is updated.
Sep 12 17:31:38.944881 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:38.949886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:38.951376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:31:38.955252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:39.111891 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:31:39.119880 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:31:39.122881 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:31:39.128873 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:31:39.128893 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:31:39.128904 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:31:39.130256 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:31:39.130269 kernel: ata3.00: applying bridge limits
Sep 12 17:31:39.131264 kernel: ata3.00: configured for UDMA/100
Sep 12 17:31:39.131890 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:31:39.182355 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:31:39.182586 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:31:39.194942 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:31:39.953744 disk-uuid[561]: The operation has completed successfully.
Sep 12 17:31:39.954991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:39.982689 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:31:39.982833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:31:40.012037 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:31:40.015574 sh[594]: Success
Sep 12 17:31:40.029908 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:31:40.066335 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:31:40.075343 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:31:40.078459 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:31:40.092929 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:31:40.092960 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:31:40.092971 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:31:40.093872 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:31:40.095120 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:31:40.099922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:31:40.100540 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:31:40.101353 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:31:40.105211 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:31:40.116462 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:31:40.116489 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:31:40.116500 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:31:40.118877 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:31:40.128120 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:31:40.129782 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:31:40.138755 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:31:40.142010 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:31:40.205556 ignition[684]: Ignition 2.19.0
Sep 12 17:31:40.206026 ignition[684]: Stage: fetch-offline
Sep 12 17:31:40.206065 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:40.206076 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:40.206168 ignition[684]: parsed url from cmdline: ""
Sep 12 17:31:40.206172 ignition[684]: no config URL provided
Sep 12 17:31:40.206178 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:31:40.206188 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:31:40.206216 ignition[684]: op(1): [started] loading QEMU firmware config module
Sep 12 17:31:40.206222 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:31:40.215522 ignition[684]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:31:40.237684 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:31:40.251010 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:31:40.257089 ignition[684]: parsing config with SHA512: 4f65d8ca35a59918e7b7c2c471db1afd023294fb2296feda55011aef6f844711026e47cd64fd109bc2a891a17ef2ab76be6153f51e28057f0aff1de3b667eebb
Sep 12 17:31:40.260548 unknown[684]: fetched base config from "system"
Sep 12 17:31:40.260561 unknown[684]: fetched user config from "qemu"
Sep 12 17:31:40.262333 ignition[684]: fetch-offline: fetch-offline passed
Sep 12 17:31:40.263141 ignition[684]: Ignition finished successfully
Sep 12 17:31:40.266183 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:31:40.273972 systemd-networkd[785]: lo: Link UP
Sep 12 17:31:40.273983 systemd-networkd[785]: lo: Gained carrier
Sep 12 17:31:40.275757 systemd-networkd[785]: Enumeration completed
Sep 12 17:31:40.275941 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:31:40.276208 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:31:40.276212 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:31:40.277040 systemd-networkd[785]: eth0: Link UP
Sep 12 17:31:40.277044 systemd-networkd[785]: eth0: Gained carrier
Sep 12 17:31:40.277051 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:31:40.278099 systemd[1]: Reached target network.target - Network.
Sep 12 17:31:40.279944 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:31:40.291015 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:31:40.297898 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:31:40.304990 ignition[788]: Ignition 2.19.0
Sep 12 17:31:40.305005 ignition[788]: Stage: kargs
Sep 12 17:31:40.305165 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:40.305177 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:40.306089 ignition[788]: kargs: kargs passed
Sep 12 17:31:40.306153 ignition[788]: Ignition finished successfully
Sep 12 17:31:40.309441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:31:40.318144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:31:40.331089 ignition[797]: Ignition 2.19.0
Sep 12 17:31:40.331099 ignition[797]: Stage: disks
Sep 12 17:31:40.331261 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:40.331273 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:40.332087 ignition[797]: disks: disks passed
Sep 12 17:31:40.334493 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:31:40.332132 ignition[797]: Ignition finished successfully
Sep 12 17:31:40.336391 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:31:40.338167 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:31:40.340100 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:31:40.342118 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:31:40.344266 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:31:40.353196 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:31:40.365833 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:31:40.372148 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:31:40.385956 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:31:40.470879 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:31:40.471683 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:31:40.474029 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:31:40.487929 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:31:40.490369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:31:40.493110 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:31:40.493164 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:31:40.493188 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:31:40.499889 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Sep 12 17:31:40.502174 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:31:40.502203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:31:40.502229 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:31:40.503340 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:31:40.506163 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:31:40.507900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:31:40.519973 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:31:40.554752 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:31:40.559438 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:31:40.564241 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:31:40.569495 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:31:40.667228 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:31:40.681105 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:31:40.684395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:31:40.689874 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:31:40.710759 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:31:40.712886 ignition[929]: INFO : Ignition 2.19.0
Sep 12 17:31:40.712886 ignition[929]: INFO : Stage: mount
Sep 12 17:31:40.712886 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:40.712886 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:40.717745 ignition[929]: INFO : mount: mount passed
Sep 12 17:31:40.717745 ignition[929]: INFO : Ignition finished successfully
Sep 12 17:31:40.715217 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:31:40.728958 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:31:41.092594 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:31:41.107002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:31:41.114309 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Sep 12 17:31:41.114345 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:31:41.114357 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:31:41.115152 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:31:41.118870 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:31:41.119907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:31:41.147891 ignition[961]: INFO : Ignition 2.19.0 Sep 12 17:31:41.147891 ignition[961]: INFO : Stage: files Sep 12 17:31:41.149652 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:31:41.149652 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:31:41.149652 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:31:41.153239 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:31:41.153239 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:31:41.153239 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:31:41.153239 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:31:41.153239 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:31:41.152773 unknown[961]: wrote ssh authorized keys file for user: core Sep 12 17:31:41.160949 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:31:41.160949 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 17:31:41.208140 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:31:41.536029 systemd-networkd[785]: eth0: Gained IPv6LL Sep 12 17:31:41.551562 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:31:41.553573 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:31:41.568108 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:31:41.568108 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:31:41.568108 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:31:41.568108 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:31:41.568108 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:31:41.568108 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:31:42.053415 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:31:42.735628 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:31:42.735628 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 17:31:42.739544 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:31:42.764579 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:31:42.769216 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:31:42.770753 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:31:42.770753 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:31:42.770753 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:31:42.770753 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:31:42.770753 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:31:42.770753 ignition[961]: INFO : files: files passed Sep 12 17:31:42.770753 ignition[961]: INFO : Ignition finished successfully Sep 12 17:31:42.772984 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:31:42.785993 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:31:42.788663 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
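The files stage above is Ignition replaying a user-provided config: it writes the Helm tarball, the YAML manifests, update.conf, and the kubernetes sysext image, creates the /etc/extensions/kubernetes.raw link, installs prepare-helm.service, and flips unit presets. The config itself is never printed, but a minimal sketch of one that would drive these operations looks roughly like the following (paths, URLs, unit names, and the "Unpack helm to /opt/bin" description are taken from the log; the spec version and the unit body are illustrative assumptions):

    import json

    # Sketch of an Ignition (spec 3.x) config consistent with the
    # files-stage operations logged above; not the actual config used.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                # Unit body is a placeholder; only the names and the
                # enabled/disabled presets are visible in the log.
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
                {"name": "coreos-metadata.service", "enabled": False},
            ],
        },
    }
    print(json.dumps(config, indent=2))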
Sep 12 17:31:42.790584 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:31:42.790719 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:31:42.799098 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:31:42.802073 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:31:42.802073 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:31:42.805164 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:31:42.808069 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:31:42.810681 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:31:42.825002 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:31:42.848682 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:31:42.848807 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:31:42.851458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:31:42.853892 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:31:42.855118 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:31:42.868993 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:31:42.882556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:31:42.885296 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:31:42.898295 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:31:42.899574 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:31:42.901901 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:31:42.904093 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:31:42.904211 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:31:42.906738 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:31:42.908267 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:31:42.910266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:31:42.912299 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:31:42.914281 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:31:42.916411 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:31:42.918499 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:31:42.920742 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:31:42.922872 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:31:42.924907 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:31:42.926639 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:31:42.926768 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:31:42.929049 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 12 17:31:42.930455 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:31:42.932503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:31:42.932628 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:31:42.934653 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:31:42.934763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:31:42.937091 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:31:42.937255 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:31:42.938971 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:31:42.940635 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:31:42.944942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:31:42.947426 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:31:42.949136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:31:42.951210 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:31:42.951308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:31:42.953815 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:31:42.953925 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:31:42.955721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:31:42.955837 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:31:42.957791 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:31:42.957917 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:31:42.967990 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:31:42.969062 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:31:42.969176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:31:42.974726 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:31:42.977051 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:31:42.978690 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:31:42.980331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:31:42.984755 ignition[1015]: INFO : Ignition 2.19.0 Sep 12 17:31:42.984755 ignition[1015]: INFO : Stage: umount Sep 12 17:31:42.984755 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:31:42.984755 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:31:42.984755 ignition[1015]: INFO : umount: umount passed Sep 12 17:31:42.984755 ignition[1015]: INFO : Ignition finished successfully Sep 12 17:31:42.980478 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:31:42.986535 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:31:42.986660 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:31:42.989643 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:31:42.989770 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:31:42.993502 systemd[1]: Stopped target network.target - Network. 
Sep 12 17:31:42.995050 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:31:42.995117 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:31:42.996924 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:31:42.996977 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:31:42.997875 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:31:42.997925 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:31:43.000601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:31:43.000653 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:31:43.002931 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:31:43.003841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:31:43.009901 systemd-networkd[785]: eth0: DHCPv6 lease lost Sep 12 17:31:43.011816 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:31:43.011974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:31:43.014657 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:31:43.014837 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:31:43.016792 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:31:43.016875 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:31:43.021937 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:31:43.024568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:31:43.024629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:31:43.025310 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:31:43.025357 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:31:43.025662 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:31:43.025707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:31:43.026152 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:31:43.026196 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:31:43.026618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:31:43.041448 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:31:43.041577 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:31:43.054100 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:31:43.059011 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:31:43.060123 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:31:43.063057 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:31:43.063120 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:31:43.066118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:31:43.066165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:31:43.069158 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:31:43.070100 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:31:43.072304 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:31:43.073238 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:31:43.075319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:31:43.075379 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:31:43.087064 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:31:43.088179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:31:43.088247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:31:43.090576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:31:43.090630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:31:43.100636 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:31:43.101897 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:31:43.235145 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:31:43.236335 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:31:43.238960 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:31:43.241013 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:31:43.241094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:31:43.262127 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:31:43.269462 systemd[1]: Switching root. Sep 12 17:31:43.300481 systemd-journald[192]: Journal stopped Sep 12 17:31:44.466042 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 12 17:31:44.466120 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:31:44.466140 kernel: SELinux: policy capability open_perms=1 Sep 12 17:31:44.466157 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:31:44.466168 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:31:44.466186 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:31:44.466199 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:31:44.466211 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:31:44.466222 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:31:44.466233 kernel: audit: type=1403 audit(1757698303.694:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:31:44.466246 systemd[1]: Successfully loaded SELinux policy in 45.898ms. Sep 12 17:31:44.466267 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.076ms. Sep 12 17:31:44.466280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:31:44.466293 systemd[1]: Detected virtualization kvm. Sep 12 17:31:44.466310 systemd[1]: Detected architecture x86-64. Sep 12 17:31:44.466322 systemd[1]: Detected first boot. Sep 12 17:31:44.466334 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:31:44.466347 zram_generator::config[1060]: No configuration found. Sep 12 17:31:44.466365 systemd[1]: Populated /etc with preset unit settings. 
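The "Initializing machine ID from VM UUID" step above is systemd's first-boot behavior in a VM (see machine-id(5)): when /etc/machine-id is empty, the ID can be seeded from the hypervisor-supplied product UUID instead of being generated randomly. A sketch of the equivalent lookup, assuming the conventional sysfs DMI path (systemd's real logic consults additional sources and fallbacks):

    # Read the VM UUID that can seed /etc/machine-id on first boot.
    # Requires a VM that exposes DMI data; path and normalization are
    # a sketch, not systemd's exact implementation.
    with open("/sys/class/dmi/id/product_uuid") as f:
        vm_uuid = f.read().strip()
    machine_id = vm_uuid.replace("-", "").lower()  # 32 hex chars, no dashes
    print(machine_id)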
Sep 12 17:31:44.466377 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:31:44.466389 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:31:44.466401 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:31:44.466419 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:31:44.466431 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:31:44.466444 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:31:44.466456 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:31:44.466468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:31:44.466489 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:31:44.466501 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:31:44.466513 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:31:44.466525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:31:44.466543 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:31:44.466556 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:31:44.466568 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:31:44.466580 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:31:44.466592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:31:44.466605 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:31:44.466616 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:31:44.466628 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:31:44.466640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:31:44.466658 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:31:44.466670 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:31:44.466682 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:31:44.466694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:31:44.466706 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:31:44.466718 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:31:44.466730 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:31:44.466746 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:31:44.466759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:31:44.466772 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:31:44.466784 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:31:44.466795 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:31:44.466807 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Sep 12 17:31:44.466819 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:31:44.466831 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:31:44.466844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:44.466874 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:31:44.466886 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:31:44.466899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:31:44.466911 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:31:44.466923 systemd[1]: Reached target machines.target - Containers. Sep 12 17:31:44.466935 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:31:44.466947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:44.466959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:31:44.466972 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:31:44.466988 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:44.467001 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:31:44.467013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:44.467027 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:31:44.467042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:44.467054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:31:44.467066 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:31:44.467078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:31:44.467096 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:31:44.467108 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:31:44.467120 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:31:44.467132 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:31:44.467144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:31:44.467155 kernel: loop: module loaded Sep 12 17:31:44.467167 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:31:44.467179 kernel: ACPI: bus type drm_connector registered Sep 12 17:31:44.467190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:31:44.467207 kernel: fuse: init (API version 7.39) Sep 12 17:31:44.467219 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:31:44.467231 systemd[1]: Stopped verity-setup.service. Sep 12 17:31:44.467243 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:44.467272 systemd-journald[1130]: Collecting audit messages is disabled. 
Sep 12 17:31:44.467294 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:31:44.467312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:31:44.467324 systemd-journald[1130]: Journal started Sep 12 17:31:44.467347 systemd-journald[1130]: Runtime Journal (/run/log/journal/96161d43ce254bdbb71e8cb9c3115850) is 6.0M, max 48.4M, 42.3M free. Sep 12 17:31:44.240507 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:31:44.257022 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:31:44.257491 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:31:44.469882 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:31:44.471583 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:31:44.472697 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:31:44.473898 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:31:44.475124 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:31:44.476360 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:31:44.477790 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:31:44.479323 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:31:44.479511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:31:44.481051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:44.481224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:44.482659 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:31:44.482835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:31:44.484362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:44.484542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:44.486059 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:31:44.486232 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:31:44.487700 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:44.487885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:44.489247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:31:44.490643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:31:44.492308 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:31:44.505865 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:31:44.522978 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:31:44.525753 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:31:44.526918 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:31:44.526959 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:31:44.528997 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:31:44.531358 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 17:31:44.538137 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:31:44.539575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:44.541653 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:31:44.546967 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:31:44.548386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:31:44.550039 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:31:44.551423 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:31:44.554452 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:31:44.568002 systemd-journald[1130]: Time spent on flushing to /var/log/journal/96161d43ce254bdbb71e8cb9c3115850 is 28.381ms for 949 entries. Sep 12 17:31:44.568002 systemd-journald[1130]: System Journal (/var/log/journal/96161d43ce254bdbb71e8cb9c3115850) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:31:44.649037 systemd-journald[1130]: Received client request to flush runtime journal. Sep 12 17:31:44.649084 kernel: loop0: detected capacity change from 0 to 142488 Sep 12 17:31:44.557589 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:31:44.561251 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:31:44.564354 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:31:44.566260 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:31:44.567976 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:31:44.575783 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:31:44.582354 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:31:44.597150 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:31:44.598551 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:31:44.644350 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:31:44.648350 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:31:44.653323 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:31:44.660669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:31:44.661897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:31:44.663721 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:31:44.675270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:31:44.678934 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:31:44.679612 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
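The journald flush statistics above make for a rough throughput estimate: 28.381 ms to move 949 entries from the runtime journal into /var/log/journal is about 30 µs per entry. This averages over write batching, so it is not a per-record latency:

    # Figures from the systemd-journald flush message above.
    flush_ms, entries = 28.381, 949
    print(f"~{flush_ms * 1e3 / entries:.1f} us/entry, "
          f"~{entries / (flush_ms / 1e3):,.0f} entries/s")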
Sep 12 17:31:44.698902 kernel: loop1: detected capacity change from 0 to 224512 Sep 12 17:31:44.701968 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 12 17:31:44.701989 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Sep 12 17:31:44.709835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:31:44.745911 kernel: loop2: detected capacity change from 0 to 140768 Sep 12 17:31:44.782877 kernel: loop3: detected capacity change from 0 to 142488 Sep 12 17:31:44.817883 kernel: loop4: detected capacity change from 0 to 224512 Sep 12 17:31:44.826873 kernel: loop5: detected capacity change from 0 to 140768 Sep 12 17:31:44.838124 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:31:44.838731 (sd-merge)[1200]: Merged extensions into '/usr'. Sep 12 17:31:44.842715 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:31:44.842729 systemd[1]: Reloading... Sep 12 17:31:44.907568 zram_generator::config[1227]: No configuration found. Sep 12 17:31:45.031986 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:31:45.054351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:31:45.103557 systemd[1]: Reloading finished in 260 ms. Sep 12 17:31:45.142886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:31:45.144437 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:31:45.158008 systemd[1]: Starting ensure-sysext.service... Sep 12 17:31:45.159903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:31:45.167561 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:31:45.167577 systemd[1]: Reloading... Sep 12 17:31:45.194591 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:31:45.194999 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:31:45.196003 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:31:45.196298 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Sep 12 17:31:45.196383 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Sep 12 17:31:45.199645 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:31:45.199657 systemd-tmpfiles[1264]: Skipping /boot Sep 12 17:31:45.214804 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:31:45.214818 systemd-tmpfiles[1264]: Skipping /boot Sep 12 17:31:45.237912 zram_generator::config[1293]: No configuration found. Sep 12 17:31:45.340075 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:31:45.389445 systemd[1]: Reloading finished in 221 ms. Sep 12 17:31:45.411667 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
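The loop-device messages above are the sysext images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') being attached; each image appears twice (loop0-2, then loop3-5) with identical sizes, and the kernel reports capacity in 512-byte sectors. Converted below, assuming that sector convention (which image maps to which loop device is not shown in the log):

    # Capacities from the "detected capacity change" messages above,
    # interpreted as 512-byte sectors.
    for sectors in (142488, 224512, 140768):
        print(f"{sectors} sectors = {sectors * 512 / 2**20:.1f} MiB")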
Sep 12 17:31:45.431424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:31:45.449249 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:31:45.453010 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:31:45.456386 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:31:45.462008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:31:45.471938 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:31:45.475446 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:31:45.481405 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.481739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:45.485679 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:45.488919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:45.494180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:45.495588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:45.499059 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:31:45.500226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.505607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.505894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:45.506238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:45.506724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.509737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:45.517177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:45.521491 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:31:45.526290 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:45.527103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:45.528554 augenrules[1354]: No rules Sep 12 17:31:45.529494 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:45.529679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:45.531719 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:31:45.534653 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Sep 12 17:31:45.537671 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 12 17:31:45.544181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.544391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:45.558281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:45.564998 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:31:45.567497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:45.572106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:45.573259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:45.576102 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:31:45.577930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:31:45.578775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:31:45.581944 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:31:45.587269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:45.587476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:45.589160 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:31:45.589341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:31:45.591249 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:31:45.593456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:45.593644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:45.604803 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:45.605024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:45.611607 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:31:45.615215 systemd[1]: Finished ensure-sysext.service. Sep 12 17:31:45.631881 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:31:45.632919 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:31:45.632985 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:31:45.646403 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:31:45.648275 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:31:45.648456 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:31:45.658158 systemd-resolved[1334]: Positive Trust Anchors: Sep 12 17:31:45.659063 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:31:45.659099 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:31:45.663687 systemd-resolved[1334]: Defaulting to hostname 'linux'. Sep 12 17:31:45.666537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1382) Sep 12 17:31:45.666454 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:31:45.677112 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:31:45.702242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:31:45.706272 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:31:45.711095 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:31:45.715890 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:31:45.720243 systemd-networkd[1401]: lo: Link UP Sep 12 17:31:45.720258 systemd-networkd[1401]: lo: Gained carrier Sep 12 17:31:45.722793 systemd-networkd[1401]: Enumeration completed Sep 12 17:31:45.723257 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:31:45.723268 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:31:45.723987 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:31:45.724157 systemd[1]: Reached target network.target - Network. Sep 12 17:31:45.725586 systemd-networkd[1401]: eth0: Link UP Sep 12 17:31:45.725597 systemd-networkd[1401]: eth0: Gained carrier Sep 12 17:31:45.725619 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:31:45.735190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:31:45.736988 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:31:45.738383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:31:45.745878 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:31:45.746195 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:31:45.746387 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:31:45.750699 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:31:45.751877 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 17:31:45.755913 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:31:45.758341 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. 
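The positive trust anchor that systemd-resolved logs above is the DNSSEC DS record for the root zone: key tag 20326 (KSK-2017, in use since the 2018 root KSK rollover), algorithm 8 (RSASHA256), digest type 2 (SHA-256), followed by the 32-byte digest. Parsing the presentation format is mechanical:

    # Parse the root trust anchor exactly as logged by systemd-resolved.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    assert len(bytes.fromhex(digest)) == 32  # digest type 2 => SHA-256
    print(f"owner={owner} key_tag={key_tag} algorithm={alg} (RSASHA256)")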
Sep 12 17:31:45.759202 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:31:45.759292 systemd-timesyncd[1405]: Initial clock synchronization to Fri 2025-09-12 17:31:45.880116 UTC. Sep 12 17:31:45.802065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:31:45.858887 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:31:45.871766 kernel: kvm_amd: TSC scaling supported Sep 12 17:31:45.871883 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:31:45.871899 kernel: kvm_amd: Nested Paging enabled Sep 12 17:31:45.871911 kernel: kvm_amd: LBR virtualization supported Sep 12 17:31:45.873864 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:31:45.873890 kernel: kvm_amd: Virtual GIF supported Sep 12 17:31:45.895868 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:31:45.926582 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:31:45.935098 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:31:45.936601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:31:45.947107 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:31:45.993066 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:31:45.994528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:31:45.995600 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:31:45.996719 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:31:45.997944 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:31:45.999332 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:31:46.000504 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:31:46.001705 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:31:46.002896 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:31:46.002924 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:31:46.003788 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:31:46.005470 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:31:46.008160 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:31:46.020344 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:31:46.022859 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:31:46.024404 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:31:46.025530 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:31:46.026472 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:31:46.027408 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:31:46.027444 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:31:46.028520 systemd[1]: Starting containerd.service - containerd container runtime... 
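The timesyncd lines above bracket the first clock step: the journal stamps the message at 17:31:45.759292, while the NTP answer from 10.0.0.1 sets the clock to 17:31:45.880116. Read naively, the guest clock was stepped forward by roughly 121 ms (a sketch; logging order and latency mean this only approximates the true offset):

    from datetime import datetime

    # Timestamps copied from the systemd-timesyncd entries above.
    logged = datetime.fromisoformat("2025-09-12 17:31:45.759292")
    synced = datetime.fromisoformat("2025-09-12 17:31:45.880116")
    print(f"apparent step: {(synced - logged).total_seconds() * 1e3:.1f} ms")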
Sep 12 17:31:46.030827 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:31:46.036010 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:31:46.039205 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:31:46.040933 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:31:46.041920 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:31:46.042718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:31:46.044842 jq[1435]: false Sep 12 17:31:46.048004 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:31:46.051805 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:31:46.054406 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:31:46.062110 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:31:46.063794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:31:46.064485 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:31:46.065452 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:31:46.069502 dbus-daemon[1434]: [system] SELinux support is enabled Sep 12 17:31:46.069751 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:31:46.071686 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:31:46.074803 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:31:46.078051 extend-filesystems[1436]: Found loop3 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found loop4 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found loop5 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found sr0 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda1 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda2 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda3 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found usr Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda4 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda6 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda7 Sep 12 17:31:46.085382 extend-filesystems[1436]: Found vda9 Sep 12 17:31:46.085382 extend-filesystems[1436]: Checking size of /dev/vda9 Sep 12 17:31:46.085086 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:31:46.121464 extend-filesystems[1436]: Resized partition /dev/vda9 Sep 12 17:31:46.087016 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:31:46.123043 jq[1446]: true Sep 12 17:31:46.087386 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 12 17:31:46.123527 update_engine[1445]: I20250912 17:31:46.118592 1445 main.cc:92] Flatcar Update Engine starting Sep 12 17:31:46.123527 update_engine[1445]: I20250912 17:31:46.122821 1445 update_check_scheduler.cc:74] Next update check in 5m21s Sep 12 17:31:46.203210 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:31:46.203251 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:31:46.203266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1387) Sep 12 17:31:46.087600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:31:46.203501 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:31:46.091533 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:31:46.092059 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:31:46.213547 jq[1460]: true Sep 12 17:31:46.112387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:31:46.112457 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:31:46.115097 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:31:46.115120 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:31:46.171031 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:31:46.177781 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:31:46.187971 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:31:46.218544 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:31:46.218544 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:31:46.218544 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:31:46.223077 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Sep 12 17:31:46.227722 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:31:46.228163 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:31:46.244763 tar[1457]: linux-amd64/LICENSE Sep 12 17:31:46.253632 tar[1457]: linux-amd64/helm Sep 12 17:31:46.294512 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:31:46.294984 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:31:46.295514 systemd-logind[1442]: New seat seat0. Sep 12 17:31:46.297475 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:31:46.317525 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:31:46.325723 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:31:46.328051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:31:46.331057 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
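The extend-filesystems pass above grows the root ext4 online, from 553472 to 1864699 blocks at 4 KiB per block (per the "(4k) blocks" in the resize2fs output), i.e. from about 2.1 GiB to about 7.1 GiB on /dev/vda9:

    # Block counts from the kernel/resize2fs messages above; 4 KiB blocks.
    block_size = 4096
    for label, blocks in (("before", 553472), ("after", 1864699)):
        print(f"{label}: {blocks} blocks = {blocks * block_size / 2**30:.2f} GiB")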
Sep 12 17:31:46.614461 containerd[1459]: time="2025-09-12T17:31:46.614344693Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:31:46.618684 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:31:46.647275 containerd[1459]: time="2025-09-12T17:31:46.647227971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649217 containerd[1459]: time="2025-09-12T17:31:46.649074983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649217 containerd[1459]: time="2025-09-12T17:31:46.649125835Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:31:46.649217 containerd[1459]: time="2025-09-12T17:31:46.649144365Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:31:46.649380 containerd[1459]: time="2025-09-12T17:31:46.649337002Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:31:46.649380 containerd[1459]: time="2025-09-12T17:31:46.649356437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649336 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:31:46.649704 containerd[1459]: time="2025-09-12T17:31:46.649437779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649704 containerd[1459]: time="2025-09-12T17:31:46.649458242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649704 containerd[1459]: time="2025-09-12T17:31:46.649680881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649704 containerd[1459]: time="2025-09-12T17:31:46.649702065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649787 containerd[1459]: time="2025-09-12T17:31:46.649715521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649787 containerd[1459]: time="2025-09-12T17:31:46.649726261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.649853 containerd[1459]: time="2025-09-12T17:31:46.649831636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:31:46.650174 containerd[1459]: time="2025-09-12T17:31:46.650141363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:31:46.650311 containerd[1459]: time="2025-09-12T17:31:46.650283707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:31:46.650311 containerd[1459]: time="2025-09-12T17:31:46.650302614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:31:46.650451 containerd[1459]: time="2025-09-12T17:31:46.650417822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:31:46.650491 containerd[1459]: time="2025-09-12T17:31:46.650483361Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:31:46.657104 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:31:46.662789 containerd[1459]: time="2025-09-12T17:31:46.662746061Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:31:46.662855 containerd[1459]: time="2025-09-12T17:31:46.662800992Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:31:46.662855 containerd[1459]: time="2025-09-12T17:31:46.662818535Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:31:46.662855 containerd[1459]: time="2025-09-12T17:31:46.662833954Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:31:46.662855 containerd[1459]: time="2025-09-12T17:31:46.662847643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:31:46.663059 containerd[1459]: time="2025-09-12T17:31:46.663025522Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:31:46.663743 containerd[1459]: time="2025-09-12T17:31:46.663706619Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:31:46.664577 containerd[1459]: time="2025-09-12T17:31:46.664536121Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:31:46.664577 containerd[1459]: time="2025-09-12T17:31:46.664576833Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:31:46.664647 containerd[1459]: time="2025-09-12T17:31:46.664596990Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:31:46.664647 containerd[1459]: time="2025-09-12T17:31:46.664617423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664647 containerd[1459]: time="2025-09-12T17:31:46.664633847Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664738 containerd[1459]: time="2025-09-12T17:31:46.664650507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664738 containerd[1459]: time="2025-09-12T17:31:46.664669495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Sep 12 17:31:46.664738 containerd[1459]: time="2025-09-12T17:31:46.664688097Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664738 containerd[1459]: time="2025-09-12T17:31:46.664708010Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664833 containerd[1459]: time="2025-09-12T17:31:46.664741572Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664833 containerd[1459]: time="2025-09-12T17:31:46.664760825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:31:46.664833 containerd[1459]: time="2025-09-12T17:31:46.664788660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.664833 containerd[1459]: time="2025-09-12T17:31:46.664806113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.664833 containerd[1459]: time="2025-09-12T17:31:46.664821144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665001 containerd[1459]: time="2025-09-12T17:31:46.664838424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665001 containerd[1459]: time="2025-09-12T17:31:46.664855561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665719 containerd[1459]: time="2025-09-12T17:31:46.665687870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665762 containerd[1459]: time="2025-09-12T17:31:46.665721574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665762 containerd[1459]: time="2025-09-12T17:31:46.665738152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665762 containerd[1459]: time="2025-09-12T17:31:46.665755675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665835 containerd[1459]: time="2025-09-12T17:31:46.665774531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665835 containerd[1459]: time="2025-09-12T17:31:46.665790844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665835 containerd[1459]: time="2025-09-12T17:31:46.665806415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.665835 containerd[1459]: time="2025-09-12T17:31:46.665824712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.665842907Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.665887076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.665903827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.665932955Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.665996764Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.666018060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.666032318Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:31:46.666040 containerd[1459]: time="2025-09-12T17:31:46.666049354Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:31:46.666237 containerd[1459]: time="2025-09-12T17:31:46.666060786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:31:46.666237 containerd[1459]: time="2025-09-12T17:31:46.666080373Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:31:46.666237 containerd[1459]: time="2025-09-12T17:31:46.666094694Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:31:46.666237 containerd[1459]: time="2025-09-12T17:31:46.666105931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:31:46.667524 containerd[1459]: time="2025-09-12T17:31:46.666445844Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:31:46.667524 containerd[1459]: time="2025-09-12T17:31:46.666514332Z" level=info msg="Connect containerd service" Sep 12 17:31:46.667524 containerd[1459]: time="2025-09-12T17:31:46.666565447Z" level=info msg="using legacy CRI server" Sep 12 17:31:46.667524 containerd[1459]: time="2025-09-12T17:31:46.666573736Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:31:46.667524 containerd[1459]: time="2025-09-12T17:31:46.666694356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:31:46.669655 containerd[1459]: time="2025-09-12T17:31:46.669611491Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:31:46.670005 
containerd[1459]: time="2025-09-12T17:31:46.669788293Z" level=info msg="Start subscribing containerd event" Sep 12 17:31:46.670005 containerd[1459]: time="2025-09-12T17:31:46.669886009Z" level=info msg="Start recovering state" Sep 12 17:31:46.670005 containerd[1459]: time="2025-09-12T17:31:46.669959662Z" level=info msg="Start event monitor" Sep 12 17:31:46.670005 containerd[1459]: time="2025-09-12T17:31:46.669980216Z" level=info msg="Start snapshots syncer" Sep 12 17:31:46.670005 containerd[1459]: time="2025-09-12T17:31:46.669995716Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:31:46.670005 containerd[1459]: time="2025-09-12T17:31:46.670003832Z" level=info msg="Start streaming server" Sep 12 17:31:46.670597 containerd[1459]: time="2025-09-12T17:31:46.670564177Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:31:46.670664 containerd[1459]: time="2025-09-12T17:31:46.670638420Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:31:46.672526 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:31:46.672766 containerd[1459]: time="2025-09-12T17:31:46.671736135Z" level=info msg="containerd successfully booted in 0.059076s" Sep 12 17:31:46.674399 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:31:46.674652 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:31:46.686094 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:31:46.711507 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:31:46.724280 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:31:46.727013 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:31:46.728536 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:31:46.842459 tar[1457]: linux-amd64/README.md Sep 12 17:31:46.864173 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:31:47.617703 systemd-networkd[1401]: eth0: Gained IPv6LL Sep 12 17:31:47.621280 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:31:47.623110 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:31:47.632100 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:31:47.634848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:47.637214 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:31:47.656757 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:31:47.657032 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:31:47.659567 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:31:47.662983 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:31:48.814373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:48.816071 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:31:48.817409 systemd[1]: Startup finished in 967ms (kernel) + 5.979s (initrd) + 5.166s (userspace) = 12.113s. 
Sep 12 17:31:48.837325 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:49.492257 kubelet[1547]: E0912 17:31:49.492184 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:49.496791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:49.497059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:49.497493 systemd[1]: kubelet.service: Consumed 1.683s CPU time. Sep 12 17:31:51.107982 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:31:51.109447 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:39300.service - OpenSSH per-connection server daemon (10.0.0.1:39300). Sep 12 17:31:51.157701 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 39300 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.159754 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.169012 systemd-logind[1442]: New session 1 of user core. Sep 12 17:31:51.170328 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:31:51.181088 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:31:51.193826 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:31:51.205144 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:31:51.208292 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:31:51.324721 systemd[1564]: Queued start job for default target default.target. Sep 12 17:31:51.336191 systemd[1564]: Created slice app.slice - User Application Slice. Sep 12 17:31:51.336219 systemd[1564]: Reached target paths.target - Paths. Sep 12 17:31:51.336243 systemd[1564]: Reached target timers.target - Timers. Sep 12 17:31:51.337829 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:31:51.349451 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:31:51.349583 systemd[1564]: Reached target sockets.target - Sockets. Sep 12 17:31:51.349602 systemd[1564]: Reached target basic.target - Basic System. Sep 12 17:31:51.349639 systemd[1564]: Reached target default.target - Main User Target. Sep 12 17:31:51.349673 systemd[1564]: Startup finished in 134ms. Sep 12 17:31:51.350188 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:31:51.351944 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:31:51.411712 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:39312.service - OpenSSH per-connection server daemon (10.0.0.1:39312). Sep 12 17:31:51.450293 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.451964 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.455920 systemd-logind[1442]: New session 2 of user core. Sep 12 17:31:51.466003 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 17:31:51.519881 sshd[1575]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:51.530504 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:39312.service: Deactivated successfully. Sep 12 17:31:51.532280 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:31:51.533891 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:31:51.545102 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:39318.service - OpenSSH per-connection server daemon (10.0.0.1:39318). Sep 12 17:31:51.546037 systemd-logind[1442]: Removed session 2. Sep 12 17:31:51.576273 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 39318 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.577812 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.581517 systemd-logind[1442]: New session 3 of user core. Sep 12 17:31:51.601111 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:31:51.651913 sshd[1582]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:51.661605 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:39318.service: Deactivated successfully. Sep 12 17:31:51.663509 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:31:51.665042 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:31:51.676092 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:39320.service - OpenSSH per-connection server daemon (10.0.0.1:39320). Sep 12 17:31:51.676944 systemd-logind[1442]: Removed session 3. Sep 12 17:31:51.708057 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 39320 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.709550 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.713429 systemd-logind[1442]: New session 4 of user core. Sep 12 17:31:51.720007 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:31:51.774523 sshd[1589]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:51.792599 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:39320.service: Deactivated successfully. Sep 12 17:31:51.794755 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:31:51.796250 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:31:51.803281 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:39328.service - OpenSSH per-connection server daemon (10.0.0.1:39328). Sep 12 17:31:51.804931 systemd-logind[1442]: Removed session 4. Sep 12 17:31:51.837355 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 39328 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.838823 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.842704 systemd-logind[1442]: New session 5 of user core. Sep 12 17:31:51.857984 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:31:51.916380 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:31:51.916732 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:51.936553 sudo[1599]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:51.938453 sshd[1596]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:51.945365 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:39328.service: Deactivated successfully. 
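Every "Accepted publickey" line above carries the same SHA256 fingerprint for user core, so these are all the same key. A sketch of how sshd derives that fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. The authorized_keys line below is a hypothetical placeholder; substitute a real one to reproduce a logged fingerprint:

```python
import base64
import hashlib

# Hypothetical authorized_keys entry, for illustration only.
authorized_key = (
    "ssh-ed25519 "
    "AAAAC3NzaC1lZDI1NTE5AAAAIExampleExampleExampleExampleExampleExample0 "
    "core@localhost"
)

blob = base64.b64decode(authorized_key.split()[1])
digest = hashlib.sha256(blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```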
Sep 12 17:31:51.947033 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:31:51.948353 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:31:51.949727 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:39344.service - OpenSSH per-connection server daemon (10.0.0.1:39344). Sep 12 17:31:51.950410 systemd-logind[1442]: Removed session 5. Sep 12 17:31:51.987872 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 39344 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:51.989519 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:51.993030 systemd-logind[1442]: New session 6 of user core. Sep 12 17:31:52.002977 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:31:52.056478 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:31:52.056836 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:52.060797 sudo[1608]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:52.066936 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:31:52.067274 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:52.087090 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:31:52.089494 auditctl[1611]: No rules Sep 12 17:31:52.090813 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:31:52.091087 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:31:52.092794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:31:52.124994 augenrules[1629]: No rules Sep 12 17:31:52.126789 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:31:52.128237 sudo[1607]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:52.130235 sshd[1604]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:52.137369 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:39344.service: Deactivated successfully. Sep 12 17:31:52.138970 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:31:52.140289 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:31:52.141602 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:39346.service - OpenSSH per-connection server daemon (10.0.0.1:39346). Sep 12 17:31:52.142293 systemd-logind[1442]: Removed session 6. Sep 12 17:31:52.179892 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 39346 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:31:52.181373 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:52.185233 systemd-logind[1442]: New session 7 of user core. Sep 12 17:31:52.196069 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:31:52.249368 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:31:52.249718 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:52.784078 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 12 17:31:52.784278 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:31:53.297819 dockerd[1658]: time="2025-09-12T17:31:53.297733862Z" level=info msg="Starting up" Sep 12 17:31:53.744504 systemd[1]: var-lib-docker-metacopy\x2dcheck346405843-merged.mount: Deactivated successfully. Sep 12 17:31:53.771183 dockerd[1658]: time="2025-09-12T17:31:53.771130243Z" level=info msg="Loading containers: start." Sep 12 17:31:53.899891 kernel: Initializing XFRM netlink socket Sep 12 17:31:53.984758 systemd-networkd[1401]: docker0: Link UP Sep 12 17:31:54.008449 dockerd[1658]: time="2025-09-12T17:31:54.008369909Z" level=info msg="Loading containers: done." Sep 12 17:31:54.029136 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1833148900-merged.mount: Deactivated successfully. Sep 12 17:31:54.031689 dockerd[1658]: time="2025-09-12T17:31:54.031649259Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:31:54.031768 dockerd[1658]: time="2025-09-12T17:31:54.031747612Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:31:54.031886 dockerd[1658]: time="2025-09-12T17:31:54.031871715Z" level=info msg="Daemon has completed initialization" Sep 12 17:31:54.076711 dockerd[1658]: time="2025-09-12T17:31:54.076644852Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:31:54.076930 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:31:54.988918 containerd[1459]: time="2025-09-12T17:31:54.988838385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:31:55.643324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863898593.mount: Deactivated successfully. 
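The daemon above comes up with storage-driver=overlay2 and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. The same facts are queryable through the daemon's /info endpoint; a sketch assuming the docker Python SDK is installed and the daemon socket is reachable:

```python
import docker  # assumes the third-party `docker` SDK package

client = docker.from_env()
info = client.info()

print(info["Driver"])         # e.g. "overlay2", matching the daemon log
print(info["ServerVersion"])  # e.g. "26.1.0"
for key, value in info.get("DriverStatus", []):
    print(f"{key}: {value}")  # includes the "Native Overlay Diff" state
```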
Sep 12 17:31:57.281078 containerd[1459]: time="2025-09-12T17:31:57.281007343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:57.281738 containerd[1459]: time="2025-09-12T17:31:57.281670875Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 17:31:57.282792 containerd[1459]: time="2025-09-12T17:31:57.282740846Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:57.285648 containerd[1459]: time="2025-09-12T17:31:57.285584548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:57.286677 containerd[1459]: time="2025-09-12T17:31:57.286645541Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.297738124s" Sep 12 17:31:57.286677 containerd[1459]: time="2025-09-12T17:31:57.286683484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 17:31:57.287336 containerd[1459]: time="2025-09-12T17:31:57.287279103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:31:59.171110 containerd[1459]: time="2025-09-12T17:31:59.171036898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.174625 containerd[1459]: time="2025-09-12T17:31:59.174563187Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 17:31:59.176649 containerd[1459]: time="2025-09-12T17:31:59.176603798Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.179344 containerd[1459]: time="2025-09-12T17:31:59.179309813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.180481 containerd[1459]: time="2025-09-12T17:31:59.180428731Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.893114357s" Sep 12 17:31:59.180556 containerd[1459]: time="2025-09-12T17:31:59.180479399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 17:31:59.181130 
containerd[1459]: time="2025-09-12T17:31:59.181021575Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:31:59.557453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:31:59.568033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:59.801171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:59.805966 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:32:00.527810 kubelet[1874]: E0912 17:32:00.527713 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:32:00.536113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:32:00.536387 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:32:02.470748 containerd[1459]: time="2025-09-12T17:32:02.470665810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:02.473292 containerd[1459]: time="2025-09-12T17:32:02.473246902Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 17:32:02.474497 containerd[1459]: time="2025-09-12T17:32:02.474431421Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:02.477443 containerd[1459]: time="2025-09-12T17:32:02.477396095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:02.478506 containerd[1459]: time="2025-09-12T17:32:02.478457544Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 3.297405539s" Sep 12 17:32:02.478506 containerd[1459]: time="2025-09-12T17:32:02.478498081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 17:32:02.479267 containerd[1459]: time="2025-09-12T17:32:02.479232636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:32:04.185055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564239827.mount: Deactivated successfully. 
Sep 12 17:32:04.869029 containerd[1459]: time="2025-09-12T17:32:04.868956409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:04.873918 containerd[1459]: time="2025-09-12T17:32:04.873878607Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 17:32:04.875066 containerd[1459]: time="2025-09-12T17:32:04.875005475Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:04.877348 containerd[1459]: time="2025-09-12T17:32:04.877302231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:04.878035 containerd[1459]: time="2025-09-12T17:32:04.877993851Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.39871782s" Sep 12 17:32:04.878035 containerd[1459]: time="2025-09-12T17:32:04.878025283Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 17:32:04.878630 containerd[1459]: time="2025-09-12T17:32:04.878549219Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:32:05.525638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867741576.mount: Deactivated successfully. 
Sep 12 17:32:06.505018 containerd[1459]: time="2025-09-12T17:32:06.504958167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:06.505802 containerd[1459]: time="2025-09-12T17:32:06.505762546Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:32:06.507158 containerd[1459]: time="2025-09-12T17:32:06.507106118Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:06.509934 containerd[1459]: time="2025-09-12T17:32:06.509902459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:06.511052 containerd[1459]: time="2025-09-12T17:32:06.510999721Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.632365942s" Sep 12 17:32:06.511117 containerd[1459]: time="2025-09-12T17:32:06.511053760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:32:06.511700 containerd[1459]: time="2025-09-12T17:32:06.511661052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:32:07.093713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2851783175.mount: Deactivated successfully. 
Sep 12 17:32:07.100150 containerd[1459]: time="2025-09-12T17:32:07.100100509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:07.100914 containerd[1459]: time="2025-09-12T17:32:07.100873063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:32:07.102110 containerd[1459]: time="2025-09-12T17:32:07.102046175Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:07.104427 containerd[1459]: time="2025-09-12T17:32:07.104386039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:07.105146 containerd[1459]: time="2025-09-12T17:32:07.105102106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 593.412502ms" Sep 12 17:32:07.105146 containerd[1459]: time="2025-09-12T17:32:07.105131809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:32:07.105725 containerd[1459]: time="2025-09-12T17:32:07.105675384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:32:07.736840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147644914.mount: Deactivated successfully. Sep 12 17:32:10.557534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:32:10.572022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:10.762207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:10.768074 (kubelet)[1980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:32:11.298977 kubelet[1980]: E0912 17:32:11.298892 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:32:11.303534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:32:11.303778 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
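kubelet has now failed twice for the same reason, a missing /var/lib/kubelet/config.yaml, and systemd reschedules it each time ("restart counter is at 1", then "at 2"). The gap between each "Failed with result 'exit-code'" and the matching "Scheduled restart job" is about ten seconds, consistent with a Restart=on-failure unit using RestartSec=10; that setting is an inference from the timestamps, not a quote from kubelet.service. A small check of the timing:

```python
# Failure and restart times lifted from the log, normalized to seconds on
# one timeline (17:31:49.497 -> 109.497, etc.). RestartSec=10 is inferred.
failures = [("17:31:49.497", 109.497), ("17:32:00.536", 120.536)]
restarts = [("17:31:59.557", 119.557), ("17:32:10.557", 130.557)]

for (f_ts, f), (r_ts, r) in zip(failures, restarts):
    print(f"failed {f_ts} -> restart scheduled {r_ts}: {r - f:.3f}s gap")
# Both gaps land within ~60ms of 10s, matching a RestartSec=10-style policy.
```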
Sep 12 17:32:13.252296 containerd[1459]: time="2025-09-12T17:32:13.252214657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:13.294609 containerd[1459]: time="2025-09-12T17:32:13.294542413Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 17:32:13.337806 containerd[1459]: time="2025-09-12T17:32:13.337719659Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:13.368542 containerd[1459]: time="2025-09-12T17:32:13.368429107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:13.370125 containerd[1459]: time="2025-09-12T17:32:13.370056806Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.264331167s" Sep 12 17:32:13.370125 containerd[1459]: time="2025-09-12T17:32:13.370116052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 17:32:16.040983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:16.053097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:16.079147 systemd[1]: Reloading requested from client PID 2051 ('systemctl') (unit session-7.scope)... Sep 12 17:32:16.079166 systemd[1]: Reloading... Sep 12 17:32:16.162902 zram_generator::config[2088]: No configuration found. Sep 12 17:32:16.573460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:32:16.652800 systemd[1]: Reloading finished in 573 ms. Sep 12 17:32:16.709694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:16.713014 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:16.715785 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:32:16.716120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:16.726256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:16.901027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:16.906254 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:32:16.963871 kubelet[2141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:16.963871 kubelet[2141]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
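Each "Pulled image ... in Ns" line above pairs a reported size with a wall-clock duration, so the effective pull rate is easy to derive. All numbers below are copied from the log; the pause image is omitted as too small to be meaningful:

```python
pulls = {  # image: (bytes reported, seconds reported)
    "kube-apiserver:v1.32.9":          (28_834_515, 2.297738124),
    "kube-controller-manager:v1.32.9": (26_421_706, 1.893114357),
    "kube-scheduler:v1.32.9":          (20_810_986, 3.297405539),
    "kube-proxy:v1.32.9":              (30_923_225, 2.398717820),
    "coredns:v1.11.3":                 (18_562_039, 1.632365942),
    "etcd:3.5.16-0":                   (57_680_541, 6.264331167),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")
# Note the scheduler pull spans a kubelet restart attempt, which is why its
# wall-clock rate looks lower than its neighbors.
```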
Sep 12 17:32:16.963871 kubelet[2141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:16.964355 kubelet[2141]: I0912 17:32:16.963974 2141 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:32:17.307735 kubelet[2141]: I0912 17:32:17.307680 2141 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:32:17.307735 kubelet[2141]: I0912 17:32:17.307713 2141 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:32:17.308147 kubelet[2141]: I0912 17:32:17.308104 2141 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:32:17.339377 kubelet[2141]: I0912 17:32:17.339328 2141 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:32:17.339443 kubelet[2141]: E0912 17:32:17.339360 2141 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:17.350904 kubelet[2141]: E0912 17:32:17.350840 2141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:32:17.350904 kubelet[2141]: I0912 17:32:17.350898 2141 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:32:17.357236 kubelet[2141]: I0912 17:32:17.357194 2141 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:32:17.359865 kubelet[2141]: I0912 17:32:17.359788 2141 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:32:17.360192 kubelet[2141]: I0912 17:32:17.359865 2141 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:32:17.360293 kubelet[2141]: I0912 17:32:17.360212 2141 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:32:17.360293 kubelet[2141]: I0912 17:32:17.360226 2141 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:32:17.360483 kubelet[2141]: I0912 17:32:17.360460 2141 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:17.364955 kubelet[2141]: I0912 17:32:17.364920 2141 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:32:17.390651 kubelet[2141]: I0912 17:32:17.390621 2141 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:32:17.390711 kubelet[2141]: I0912 17:32:17.390660 2141 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:32:17.390711 kubelet[2141]: I0912 17:32:17.390676 2141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:32:17.393434 kubelet[2141]: W0912 17:32:17.393226 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:17.393434 kubelet[2141]: E0912 17:32:17.393308 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:17.393434 kubelet[2141]: W0912 17:32:17.393330 2141 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:17.393434 kubelet[2141]: E0912 17:32:17.393396 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:17.394113 kubelet[2141]: I0912 17:32:17.394083 2141 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:32:17.394495 kubelet[2141]: I0912 17:32:17.394470 2141 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:32:17.395152 kubelet[2141]: W0912 17:32:17.395125 2141 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:32:17.397494 kubelet[2141]: I0912 17:32:17.397462 2141 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:32:17.397544 kubelet[2141]: I0912 17:32:17.397511 2141 server.go:1287] "Started kubelet" Sep 12 17:32:17.401067 kubelet[2141]: I0912 17:32:17.401033 2141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:32:17.401650 kubelet[2141]: I0912 17:32:17.401618 2141 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:32:17.401871 kubelet[2141]: I0912 17:32:17.401781 2141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:32:17.402227 kubelet[2141]: I0912 17:32:17.402202 2141 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:32:17.402806 kubelet[2141]: I0912 17:32:17.402626 2141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:32:17.402806 kubelet[2141]: I0912 17:32:17.402731 2141 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:32:17.407529 kubelet[2141]: E0912 17:32:17.407501 2141 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:32:17.407639 kubelet[2141]: I0912 17:32:17.407622 2141 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:32:17.407888 kubelet[2141]: I0912 17:32:17.407821 2141 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:32:17.408436 kubelet[2141]: W0912 17:32:17.408388 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:17.408478 kubelet[2141]: E0912 17:32:17.408438 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:17.408673 kubelet[2141]: I0912 17:32:17.408637 2141 factory.go:221] 
Registration of the systemd container factory successfully Sep 12 17:32:17.408765 kubelet[2141]: I0912 17:32:17.408740 2141 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:32:17.409824 kubelet[2141]: E0912 17:32:17.409781 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Sep 12 17:32:17.410023 kubelet[2141]: E0912 17:32:17.410003 2141 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:32:17.410968 kubelet[2141]: I0912 17:32:17.410941 2141 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:32:17.411115 kubelet[2141]: I0912 17:32:17.411098 2141 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:32:17.412787 kubelet[2141]: E0912 17:32:17.410430 2141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499533edf2af4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:32:17.397484276 +0000 UTC m=+0.482645324,LastTimestamp:2025-09-12 17:32:17.397484276 +0000 UTC m=+0.482645324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:32:17.427978 kubelet[2141]: I0912 17:32:17.427929 2141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:32:17.430458 kubelet[2141]: I0912 17:32:17.429309 2141 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:32:17.430458 kubelet[2141]: I0912 17:32:17.429331 2141 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:32:17.430458 kubelet[2141]: I0912 17:32:17.429367 2141 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:17.430979 kubelet[2141]: I0912 17:32:17.430953 2141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:32:17.431044 kubelet[2141]: I0912 17:32:17.430989 2141 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:32:17.431080 kubelet[2141]: I0912 17:32:17.431052 2141 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
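[editor's note] Every failure above has the same root cause: the reflectors, the lease controller, and the event recorder all dial https://10.0.0.64:6443, but the static kube-apiserver pod this kubelet is about to launch is not serving yet, so every TCP connect is refused. A minimal stdlib-Go probe, assuming only the endpoint taken from these log lines, reproduces the exact error string:

```go
// probe.go - a sketch (not part of the kubelet) reproducing the connectivity
// failure seen in every reflector entry above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiServer = "10.0.0.64:6443" // endpoint taken from the log lines above

	conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
	if err != nil {
		// While the static kube-apiserver pod is still coming up this prints
		// "dial tcp 10.0.0.64:6443: connect: connection refused".
		fmt.Println(err)
		return
	}
	conn.Close()
	fmt.Println("apiserver TCP endpoint reachable")
}
```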
Sep 12 17:32:17.431080 kubelet[2141]: I0912 17:32:17.431062 2141 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:32:17.431215 kubelet[2141]: E0912 17:32:17.431192 2141 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:32:17.432508 kubelet[2141]: W0912 17:32:17.431908 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:17.432508 kubelet[2141]: E0912 17:32:17.432007 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:17.435624 kubelet[2141]: I0912 17:32:17.435587 2141 policy_none.go:49] "None policy: Start" Sep 12 17:32:17.435676 kubelet[2141]: I0912 17:32:17.435630 2141 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:32:17.435676 kubelet[2141]: I0912 17:32:17.435652 2141 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:32:17.443234 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:32:17.457425 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:32:17.460620 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:32:17.475761 kubelet[2141]: I0912 17:32:17.475736 2141 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:32:17.476185 kubelet[2141]: I0912 17:32:17.476152 2141 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:32:17.476252 kubelet[2141]: I0912 17:32:17.476173 2141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:32:17.476551 kubelet[2141]: I0912 17:32:17.476534 2141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:32:17.477705 kubelet[2141]: E0912 17:32:17.477619 2141 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:32:17.477705 kubelet[2141]: E0912 17:32:17.477687 2141 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:32:17.545188 systemd[1]: Created slice kubepods-burstable-pod7473f4d2122347b5386240e53c159f0a.slice - libcontainer container kubepods-burstable-pod7473f4d2122347b5386240e53c159f0a.slice. Sep 12 17:32:17.555838 kubelet[2141]: E0912 17:32:17.555776 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:17.559535 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
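[editor's note] The kubepods-*.slice units systemd creates here follow the systemd cgroup driver's naming scheme (CgroupDriver is "systemd" in the node config dump above): a QoS-class segment plus the pod UID, with dashes in the UID rewritten to underscores (that last rule is inferred from slice names further down this log, e.g. kubepods-besteffort-pod4e6d8cda_1c18_4ce2_8a9d_4361190c75e6.slice). A small sketch of that mapping, inferred from the log rather than taken from kubelet source:

```go
// slicename.go - illustrative mapping from pod UID to the systemd slice names
// visible in this log; sliceName is a hypothetical helper, not a kubelet API.
package main

import (
	"fmt"
	"strings"
)

func sliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceName("burstable", "72a30db4fc25e4da65a3b99eba43be94"))
	fmt.Println(sliceName("besteffort", "4e6d8cda-1c18-4ce2-8a9d-4361190c75e6"))
}
```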
Sep 12 17:32:17.563671 kubelet[2141]: E0912 17:32:17.563631 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:17.565637 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 12 17:32:17.567450 kubelet[2141]: E0912 17:32:17.567420 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:17.577792 kubelet[2141]: I0912 17:32:17.577752 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:17.578340 kubelet[2141]: E0912 17:32:17.578300 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Sep 12 17:32:17.611102 kubelet[2141]: E0912 17:32:17.611060 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Sep 12 17:32:17.612193 kubelet[2141]: I0912 17:32:17.612165 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:17.612255 kubelet[2141]: I0912 17:32:17.612200 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:17.612255 kubelet[2141]: I0912 17:32:17.612226 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:17.612255 kubelet[2141]: I0912 17:32:17.612247 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:17.612327 kubelet[2141]: I0912 17:32:17.612270 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:17.612327 kubelet[2141]: I0912 17:32:17.612307 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:17.612376 kubelet[2141]: I0912 17:32:17.612329 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:17.612376 kubelet[2141]: I0912 17:32:17.612352 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:17.612415 kubelet[2141]: I0912 17:32:17.612373 2141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:17.779685 kubelet[2141]: I0912 17:32:17.779655 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:17.779980 kubelet[2141]: E0912 17:32:17.779935 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Sep 12 17:32:17.857083 kubelet[2141]: E0912 17:32:17.856997 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:17.858268 containerd[1459]: time="2025-09-12T17:32:17.858215790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7473f4d2122347b5386240e53c159f0a,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:17.864469 kubelet[2141]: E0912 17:32:17.864441 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:17.865734 containerd[1459]: time="2025-09-12T17:32:17.865328519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:17.868654 kubelet[2141]: E0912 17:32:17.868633 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:17.869112 containerd[1459]: time="2025-09-12T17:32:17.869074330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:18.012309 kubelet[2141]: E0912 17:32:18.012236 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Sep 12 
17:32:18.182045 kubelet[2141]: I0912 17:32:18.181892 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:18.182296 kubelet[2141]: E0912 17:32:18.182249 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Sep 12 17:32:18.327018 kubelet[2141]: W0912 17:32:18.326964 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:18.327018 kubelet[2141]: E0912 17:32:18.327000 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:18.738974 kubelet[2141]: W0912 17:32:18.738893 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:18.738974 kubelet[2141]: E0912 17:32:18.738976 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:18.783685 kubelet[2141]: W0912 17:32:18.783640 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 17:32:18.783786 kubelet[2141]: E0912 17:32:18.783684 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:18.813495 kubelet[2141]: E0912 17:32:18.813449 2141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Sep 12 17:32:18.912135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795931165.mount: Deactivated successfully. 
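[editor's note] The lease controller's retry interval doubles across these entries while the API server stays unreachable: 200ms, then 400ms, 800ms, and now 1.6s. A sketch of that doubling backoff; the cap below is an assumption for illustration, not a value from the log:

```go
// backoff.go - sketch of the doubling "Failed to ensure lease exists, will
// retry" interval observed above (200ms -> 400ms -> 800ms -> 1.6s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond  // first interval reported in the log
	const maxInterval = 7 * time.Second // assumed cap, for illustration only

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: will retry after %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```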
Sep 12 17:32:18.923021 containerd[1459]: time="2025-09-12T17:32:18.922983768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:32:18.924009 containerd[1459]: time="2025-09-12T17:32:18.923981550Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:32:18.924787 containerd[1459]: time="2025-09-12T17:32:18.924748574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:32:18.925708 containerd[1459]: time="2025-09-12T17:32:18.925682592Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:32:18.928009 containerd[1459]: time="2025-09-12T17:32:18.927977505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:32:18.928913 containerd[1459]: time="2025-09-12T17:32:18.928886108Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:32:18.929770 containerd[1459]: time="2025-09-12T17:32:18.929730807Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:32:18.932267 containerd[1459]: time="2025-09-12T17:32:18.932232666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:32:18.934151 containerd[1459]: time="2025-09-12T17:32:18.934110541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.068676306s" Sep 12 17:32:18.934812 containerd[1459]: time="2025-09-12T17:32:18.934780662Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.065637145s" Sep 12 17:32:18.935412 containerd[1459]: time="2025-09-12T17:32:18.935383813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.077082649s" Sep 12 17:32:18.972320 kubelet[2141]: W0912 17:32:18.969497 2141 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Sep 12 
17:32:18.972320 kubelet[2141]: E0912 17:32:18.972191 2141 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:18.983686 kubelet[2141]: I0912 17:32:18.983656 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:18.984047 kubelet[2141]: E0912 17:32:18.984014 2141 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Sep 12 17:32:19.116117 containerd[1459]: time="2025-09-12T17:32:19.115984187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:19.116117 containerd[1459]: time="2025-09-12T17:32:19.116038731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:19.116117 containerd[1459]: time="2025-09-12T17:32:19.116048341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.116396 containerd[1459]: time="2025-09-12T17:32:19.116139442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.121331 containerd[1459]: time="2025-09-12T17:32:19.121253637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:19.121434 containerd[1459]: time="2025-09-12T17:32:19.121326319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:19.121434 containerd[1459]: time="2025-09-12T17:32:19.121341160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.121560 containerd[1459]: time="2025-09-12T17:32:19.121498239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.197233 systemd[1]: Started cri-containerd-cea1162aa9a4d70e378cd2a00858ff031ac6fa69a03d9a9017a4bce05054d35d.scope - libcontainer container cea1162aa9a4d70e378cd2a00858ff031ac6fa69a03d9a9017a4bce05054d35d. Sep 12 17:32:19.199682 systemd[1]: Started cri-containerd-db07600249da34e937f5c4fb853398af97f87db6177f5bf9ffff281f6b70776b.scope - libcontainer container db07600249da34e937f5c4fb853398af97f87db6177f5bf9ffff281f6b70776b. Sep 12 17:32:19.214016 containerd[1459]: time="2025-09-12T17:32:19.213651720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:19.214016 containerd[1459]: time="2025-09-12T17:32:19.213731779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:19.214016 containerd[1459]: time="2025-09-12T17:32:19.213743773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.214016 containerd[1459]: time="2025-09-12T17:32:19.213826838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:19.243266 systemd[1]: Started cri-containerd-6624e48465967d41b2193fa0742647c11caed0557defb59281a10d1a5df37ba7.scope - libcontainer container 6624e48465967d41b2193fa0742647c11caed0557defb59281a10d1a5df37ba7. Sep 12 17:32:19.255459 containerd[1459]: time="2025-09-12T17:32:19.255403564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cea1162aa9a4d70e378cd2a00858ff031ac6fa69a03d9a9017a4bce05054d35d\"" Sep 12 17:32:19.257557 kubelet[2141]: E0912 17:32:19.257490 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.260265 containerd[1459]: time="2025-09-12T17:32:19.260229385Z" level=info msg="CreateContainer within sandbox \"cea1162aa9a4d70e378cd2a00858ff031ac6fa69a03d9a9017a4bce05054d35d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:32:19.263706 containerd[1459]: time="2025-09-12T17:32:19.262903432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7473f4d2122347b5386240e53c159f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"db07600249da34e937f5c4fb853398af97f87db6177f5bf9ffff281f6b70776b\"" Sep 12 17:32:19.265838 kubelet[2141]: E0912 17:32:19.265802 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.269931 containerd[1459]: time="2025-09-12T17:32:19.269876267Z" level=info msg="CreateContainer within sandbox \"db07600249da34e937f5c4fb853398af97f87db6177f5bf9ffff281f6b70776b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:32:19.292982 containerd[1459]: time="2025-09-12T17:32:19.292913748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"6624e48465967d41b2193fa0742647c11caed0557defb59281a10d1a5df37ba7\"" Sep 12 17:32:19.294204 kubelet[2141]: E0912 17:32:19.293945 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.294279 containerd[1459]: time="2025-09-12T17:32:19.294096384Z" level=info msg="CreateContainer within sandbox \"cea1162aa9a4d70e378cd2a00858ff031ac6fa69a03d9a9017a4bce05054d35d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e62d2c51e92ff03804ffb0ae684450574f5336489afbcf6fa1628477515ecea\"" Sep 12 17:32:19.295240 containerd[1459]: time="2025-09-12T17:32:19.294967958Z" level=info msg="StartContainer for \"0e62d2c51e92ff03804ffb0ae684450574f5336489afbcf6fa1628477515ecea\"" Sep 12 17:32:19.295606 containerd[1459]: time="2025-09-12T17:32:19.295398320Z" level=info msg="CreateContainer within sandbox \"6624e48465967d41b2193fa0742647c11caed0557defb59281a10d1a5df37ba7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:32:19.300073 containerd[1459]: 
time="2025-09-12T17:32:19.299814854Z" level=info msg="CreateContainer within sandbox \"db07600249da34e937f5c4fb853398af97f87db6177f5bf9ffff281f6b70776b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0587c06cb353f7c753b76ab056b757eedadb1c9d4c836a9c6502f501702087d8\"" Sep 12 17:32:19.300335 containerd[1459]: time="2025-09-12T17:32:19.300276350Z" level=info msg="StartContainer for \"0587c06cb353f7c753b76ab056b757eedadb1c9d4c836a9c6502f501702087d8\"" Sep 12 17:32:19.311614 containerd[1459]: time="2025-09-12T17:32:19.311492256Z" level=info msg="CreateContainer within sandbox \"6624e48465967d41b2193fa0742647c11caed0557defb59281a10d1a5df37ba7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ed4c6a14e791bd8e69188dc06413046d61122b2f81c47b0f479fc50a805d48f\"" Sep 12 17:32:19.312114 containerd[1459]: time="2025-09-12T17:32:19.312091240Z" level=info msg="StartContainer for \"0ed4c6a14e791bd8e69188dc06413046d61122b2f81c47b0f479fc50a805d48f\"" Sep 12 17:32:19.333159 systemd[1]: Started cri-containerd-0587c06cb353f7c753b76ab056b757eedadb1c9d4c836a9c6502f501702087d8.scope - libcontainer container 0587c06cb353f7c753b76ab056b757eedadb1c9d4c836a9c6502f501702087d8. Sep 12 17:32:19.335138 systemd[1]: Started cri-containerd-0e62d2c51e92ff03804ffb0ae684450574f5336489afbcf6fa1628477515ecea.scope - libcontainer container 0e62d2c51e92ff03804ffb0ae684450574f5336489afbcf6fa1628477515ecea. Sep 12 17:32:19.344511 systemd[1]: Started cri-containerd-0ed4c6a14e791bd8e69188dc06413046d61122b2f81c47b0f479fc50a805d48f.scope - libcontainer container 0ed4c6a14e791bd8e69188dc06413046d61122b2f81c47b0f479fc50a805d48f. Sep 12 17:32:19.423241 containerd[1459]: time="2025-09-12T17:32:19.423020413Z" level=info msg="StartContainer for \"0e62d2c51e92ff03804ffb0ae684450574f5336489afbcf6fa1628477515ecea\" returns successfully" Sep 12 17:32:19.423241 containerd[1459]: time="2025-09-12T17:32:19.423195439Z" level=info msg="StartContainer for \"0587c06cb353f7c753b76ab056b757eedadb1c9d4c836a9c6502f501702087d8\" returns successfully" Sep 12 17:32:19.423241 containerd[1459]: time="2025-09-12T17:32:19.423229760Z" level=info msg="StartContainer for \"0ed4c6a14e791bd8e69188dc06413046d61122b2f81c47b0f479fc50a805d48f\" returns successfully" Sep 12 17:32:19.439340 kubelet[2141]: E0912 17:32:19.439288 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:19.439471 kubelet[2141]: E0912 17:32:19.439452 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.445101 kubelet[2141]: E0912 17:32:19.445055 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:19.445224 kubelet[2141]: E0912 17:32:19.445192 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.447670 kubelet[2141]: E0912 17:32:19.447642 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:19.447800 kubelet[2141]: E0912 17:32:19.447780 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.476214 kubelet[2141]: E0912 17:32:19.476166 2141 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:20.449408 kubelet[2141]: E0912 17:32:20.449008 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:20.449408 kubelet[2141]: E0912 17:32:20.449154 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:20.449408 kubelet[2141]: E0912 17:32:20.449186 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:20.449408 kubelet[2141]: E0912 17:32:20.449311 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:20.449408 kubelet[2141]: E0912 17:32:20.449366 2141 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:20.449953 kubelet[2141]: E0912 17:32:20.449438 2141 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:20.585816 kubelet[2141]: I0912 17:32:20.585764 2141 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:20.636156 kubelet[2141]: E0912 17:32:20.636095 2141 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:32:20.721185 kubelet[2141]: I0912 17:32:20.721047 2141 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:32:20.813715 kubelet[2141]: I0912 17:32:20.813525 2141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:20.818986 kubelet[2141]: E0912 17:32:20.818961 2141 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:20.818986 kubelet[2141]: I0912 17:32:20.818980 2141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:20.820576 kubelet[2141]: E0912 17:32:20.820535 2141 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:20.820576 kubelet[2141]: I0912 17:32:20.820575 2141 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:20.822362 kubelet[2141]: E0912 17:32:20.822338 2141 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:21.392717 kubelet[2141]: I0912 17:32:21.392659 2141 apiserver.go:52] "Watching apiserver" Sep 12 17:32:21.408640 kubelet[2141]: I0912 17:32:21.408595 2141 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:32:22.862977 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)... Sep 12 17:32:22.862997 systemd[1]: Reloading... Sep 12 17:32:22.955893 zram_generator::config[2465]: No configuration found. Sep 12 17:32:23.061138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:32:23.153362 systemd[1]: Reloading finished in 289 ms. Sep 12 17:32:23.198977 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:23.224195 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:32:23.224481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:23.224532 systemd[1]: kubelet.service: Consumed 1.155s CPU time, 133.9M memory peak, 0B memory swap peak. Sep 12 17:32:23.235178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:23.446356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:23.453214 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:32:23.500220 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:23.500220 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:32:23.500220 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:23.500666 kubelet[2504]: I0912 17:32:23.500564 2504 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:32:23.508410 kubelet[2504]: I0912 17:32:23.508368 2504 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:32:23.508410 kubelet[2504]: I0912 17:32:23.508397 2504 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:32:23.508688 kubelet[2504]: I0912 17:32:23.508668 2504 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:32:23.509989 kubelet[2504]: I0912 17:32:23.509969 2504 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 12 17:32:23.511990 kubelet[2504]: I0912 17:32:23.511954 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:32:23.514686 kubelet[2504]: E0912 17:32:23.514652 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:32:23.514686 kubelet[2504]: I0912 17:32:23.514685 2504 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:32:23.520134 kubelet[2504]: I0912 17:32:23.519621 2504 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:32:23.520134 kubelet[2504]: I0912 17:32:23.519860 2504 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:32:23.520314 kubelet[2504]: I0912 17:32:23.519899 2504 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:32:23.520397 kubelet[2504]: I0912 17:32:23.520339 2504 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:32:23.520397 kubelet[2504]: I0912 17:32:23.520354 2504 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:32:23.520449 kubelet[2504]: I0912 17:32:23.520430 2504 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:23.520646 kubelet[2504]: I0912 17:32:23.520631 2504 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:32:23.520674 kubelet[2504]: I0912 17:32:23.520659 2504 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:32:23.520700 kubelet[2504]: I0912 17:32:23.520680 2504 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:32:23.520700 kubelet[2504]: I0912 17:32:23.520691 2504 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Sep 12 17:32:23.522145 kubelet[2504]: I0912 17:32:23.522112 2504 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:32:23.522910 kubelet[2504]: I0912 17:32:23.522734 2504 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:32:23.523218 kubelet[2504]: I0912 17:32:23.523198 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:32:23.523258 kubelet[2504]: I0912 17:32:23.523226 2504 server.go:1287] "Started kubelet" Sep 12 17:32:23.525837 kubelet[2504]: I0912 17:32:23.524270 2504 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:32:23.527882 kubelet[2504]: I0912 17:32:23.525888 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:32:23.527882 kubelet[2504]: I0912 17:32:23.526141 2504 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:32:23.527882 kubelet[2504]: I0912 17:32:23.527267 2504 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:32:23.530176 kubelet[2504]: E0912 17:32:23.530150 2504 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:32:23.531056 kubelet[2504]: I0912 17:32:23.530891 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:32:23.531660 kubelet[2504]: I0912 17:32:23.531632 2504 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:32:23.531757 kubelet[2504]: I0912 17:32:23.531733 2504 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:32:23.531905 kubelet[2504]: I0912 17:32:23.531888 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:32:23.533780 kubelet[2504]: I0912 17:32:23.533676 2504 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:32:23.534063 kubelet[2504]: I0912 17:32:23.533975 2504 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:32:23.534452 kubelet[2504]: I0912 17:32:23.533711 2504 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:32:23.536328 kubelet[2504]: I0912 17:32:23.536301 2504 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:32:23.549745 kubelet[2504]: I0912 17:32:23.549702 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:32:23.551289 kubelet[2504]: I0912 17:32:23.551269 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:32:23.551356 kubelet[2504]: I0912 17:32:23.551310 2504 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:32:23.551356 kubelet[2504]: I0912 17:32:23.551330 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
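[editor's note] The restarted kubelet announces the same two endpoints as its predecessor: the HTTPS server on port 10250 and the podresources API on a Unix socket. A reachability sketch for both, checking only that something is listening; a real podresources client would speak gRPC over the socket, which is not shown here:

```go
// socketcheck.go - sketch probing the two kubelet endpoints named in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func check(network, addr string) {
	conn, err := net.DialTimeout(network, addr, time.Second)
	if err != nil {
		fmt.Printf("%s %s: %v\n", network, addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s %s: reachable\n", network, addr)
}

func main() {
	check("tcp", "127.0.0.1:10250") // kubelet server port from the log
	check("unix", "/var/lib/kubelet/pod-resources/kubelet.sock")
}
```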
Sep 12 17:32:23.551356 kubelet[2504]: I0912 17:32:23.551337 2504 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:32:23.552755 kubelet[2504]: E0912 17:32:23.552729 2504 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:32:23.575001 kubelet[2504]: I0912 17:32:23.574970 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:32:23.575001 kubelet[2504]: I0912 17:32:23.574988 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:32:23.575001 kubelet[2504]: I0912 17:32:23.575007 2504 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:23.575192 kubelet[2504]: I0912 17:32:23.575152 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:32:23.575192 kubelet[2504]: I0912 17:32:23.575162 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:32:23.575192 kubelet[2504]: I0912 17:32:23.575180 2504 policy_none.go:49] "None policy: Start" Sep 12 17:32:23.575192 kubelet[2504]: I0912 17:32:23.575188 2504 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:32:23.575274 kubelet[2504]: I0912 17:32:23.575198 2504 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:32:23.575306 kubelet[2504]: I0912 17:32:23.575294 2504 state_mem.go:75] "Updated machine memory state" Sep 12 17:32:23.579011 kubelet[2504]: I0912 17:32:23.578908 2504 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:32:23.579092 kubelet[2504]: I0912 17:32:23.579075 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:32:23.579135 kubelet[2504]: I0912 17:32:23.579092 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:32:23.579286 kubelet[2504]: I0912 17:32:23.579263 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:32:23.580279 kubelet[2504]: E0912 17:32:23.580257 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:32:23.653824 kubelet[2504]: I0912 17:32:23.653789 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:23.653977 kubelet[2504]: I0912 17:32:23.653798 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:23.654039 kubelet[2504]: I0912 17:32:23.653803 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.689792 kubelet[2504]: I0912 17:32:23.689774 2504 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:23.735832 kubelet[2504]: I0912 17:32:23.735744 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.735832 kubelet[2504]: I0912 17:32:23.735768 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:23.735832 kubelet[2504]: I0912 17:32:23.735784 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:23.735832 kubelet[2504]: I0912 17:32:23.735801 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7473f4d2122347b5386240e53c159f0a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7473f4d2122347b5386240e53c159f0a\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:23.735832 kubelet[2504]: I0912 17:32:23.735819 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.735994 kubelet[2504]: I0912 17:32:23.735833 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.735994 kubelet[2504]: I0912 17:32:23.735858 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.735994 kubelet[2504]: I0912 17:32:23.735892 2504 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:23.735994 kubelet[2504]: I0912 17:32:23.735920 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:23.934313 kubelet[2504]: E0912 17:32:23.932700 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:23.934313 kubelet[2504]: E0912 17:32:23.933039 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:23.934313 kubelet[2504]: E0912 17:32:23.933196 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:24.279565 kubelet[2504]: I0912 17:32:24.278639 2504 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:32:24.279565 kubelet[2504]: I0912 17:32:24.278727 2504 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:32:24.521554 kubelet[2504]: I0912 17:32:24.521490 2504 apiserver.go:52] "Watching apiserver" Sep 12 17:32:24.532255 kubelet[2504]: I0912 17:32:24.532143 2504 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:32:24.563880 kubelet[2504]: I0912 17:32:24.563834 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:24.564140 kubelet[2504]: I0912 17:32:24.564105 2504 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:24.564363 kubelet[2504]: E0912 17:32:24.564339 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:24.843629 kubelet[2504]: E0912 17:32:24.842955 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:24.843629 kubelet[2504]: E0912 17:32:24.842988 2504 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:24.843629 kubelet[2504]: E0912 17:32:24.843136 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:24.843629 kubelet[2504]: E0912 17:32:24.843250 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:25.052786 kubelet[2504]: I0912 17:32:25.052177 2504 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.052145646 podStartE2EDuration="2.052145646s" podCreationTimestamp="2025-09-12 17:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:24.843010984 +0000 UTC m=+1.384965169" watchObservedRunningTime="2025-09-12 17:32:25.052145646 +0000 UTC m=+1.594099911" Sep 12 17:32:25.064298 kubelet[2504]: I0912 17:32:25.064234 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.064209663 podStartE2EDuration="2.064209663s" podCreationTimestamp="2025-09-12 17:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:25.05236916 +0000 UTC m=+1.594323345" watchObservedRunningTime="2025-09-12 17:32:25.064209663 +0000 UTC m=+1.606163848" Sep 12 17:32:25.071982 kubelet[2504]: I0912 17:32:25.071920 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.071897239 podStartE2EDuration="2.071897239s" podCreationTimestamp="2025-09-12 17:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:25.064472907 +0000 UTC m=+1.606427102" watchObservedRunningTime="2025-09-12 17:32:25.071897239 +0000 UTC m=+1.613851424" Sep 12 17:32:25.565198 kubelet[2504]: E0912 17:32:25.565162 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:25.565746 kubelet[2504]: E0912 17:32:25.565259 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:27.694840 kubelet[2504]: E0912 17:32:27.694799 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:28.959348 systemd[1]: Created slice kubepods-besteffort-pod4e6d8cda_1c18_4ce2_8a9d_4361190c75e6.slice - libcontainer container kubepods-besteffort-pod4e6d8cda_1c18_4ce2_8a9d_4361190c75e6.slice. 
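[editor's note] The podStartSLOduration values in these entries are straight timestamp arithmetic: the observed running time minus the pod's creation timestamp, e.g. 17:32:25.052145646 minus 17:32:23 gives the reported 2.052145646s for kube-apiserver-localhost. A sketch reproducing that calculation from the strings in the entry:

```go
// slo.go - reproduces the podStartSLOduration arithmetic from the log entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-09-12 17:32:23 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-09-12 17:32:25.052145646 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 2.052145646s, the reported podStartSLOduration.
	fmt.Println(observed.Sub(created))
}
```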
Sep 12 17:32:28.967797 kubelet[2504]: I0912 17:32:28.967767 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-kube-proxy\") pod \"kube-proxy-rm8qw\" (UID: \"4e6d8cda-1c18-4ce2-8a9d-4361190c75e6\") " pod="kube-system/kube-proxy-rm8qw" Sep 12 17:32:28.967797 kubelet[2504]: I0912 17:32:28.967797 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-xtables-lock\") pod \"kube-proxy-rm8qw\" (UID: \"4e6d8cda-1c18-4ce2-8a9d-4361190c75e6\") " pod="kube-system/kube-proxy-rm8qw" Sep 12 17:32:28.968227 kubelet[2504]: I0912 17:32:28.967824 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sz5\" (UniqueName: \"kubernetes.io/projected/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-kube-api-access-t4sz5\") pod \"kube-proxy-rm8qw\" (UID: \"4e6d8cda-1c18-4ce2-8a9d-4361190c75e6\") " pod="kube-system/kube-proxy-rm8qw" Sep 12 17:32:28.968227 kubelet[2504]: I0912 17:32:28.967842 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-lib-modules\") pod \"kube-proxy-rm8qw\" (UID: \"4e6d8cda-1c18-4ce2-8a9d-4361190c75e6\") " pod="kube-system/kube-proxy-rm8qw" Sep 12 17:32:29.008910 kubelet[2504]: I0912 17:32:29.008803 2504 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:32:29.009343 containerd[1459]: time="2025-09-12T17:32:29.009287841Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:32:29.010010 kubelet[2504]: I0912 17:32:29.009986 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:32:29.073789 kubelet[2504]: E0912 17:32:29.073743 2504 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 12 17:32:29.073789 kubelet[2504]: E0912 17:32:29.073785 2504 projected.go:194] Error preparing data for projected volume kube-api-access-t4sz5 for pod kube-system/kube-proxy-rm8qw: configmap "kube-root-ca.crt" not found Sep 12 17:32:29.073969 kubelet[2504]: E0912 17:32:29.073879 2504 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-kube-api-access-t4sz5 podName:4e6d8cda-1c18-4ce2-8a9d-4361190c75e6 nodeName:}" failed. No retries permitted until 2025-09-12 17:32:29.573839901 +0000 UTC m=+6.115794086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4sz5" (UniqueName: "kubernetes.io/projected/4e6d8cda-1c18-4ce2-8a9d-4361190c75e6-kube-api-access-t4sz5") pod "kube-proxy-rm8qw" (UID: "4e6d8cda-1c18-4ce2-8a9d-4361190c75e6") : configmap "kube-root-ca.crt" not found Sep 12 17:32:29.869166 kubelet[2504]: E0912 17:32:29.869116 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:29.869957 containerd[1459]: time="2025-09-12T17:32:29.869880420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm8qw,Uid:4e6d8cda-1c18-4ce2-8a9d-4361190c75e6,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:30.267660 containerd[1459]: time="2025-09-12T17:32:30.267267385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:30.267660 containerd[1459]: time="2025-09-12T17:32:30.267387205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:30.267660 containerd[1459]: time="2025-09-12T17:32:30.267405050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:30.267660 containerd[1459]: time="2025-09-12T17:32:30.267528366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:30.307722 systemd[1]: Started cri-containerd-28134b075ba4a263fb855eca6bb8d698cfd7a5e7140099ba9ffceee4d06d6ed8.scope - libcontainer container 28134b075ba4a263fb855eca6bb8d698cfd7a5e7140099ba9ffceee4d06d6ed8. Sep 12 17:32:30.334115 containerd[1459]: time="2025-09-12T17:32:30.334072312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rm8qw,Uid:4e6d8cda-1c18-4ce2-8a9d-4361190c75e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"28134b075ba4a263fb855eca6bb8d698cfd7a5e7140099ba9ffceee4d06d6ed8\"" Sep 12 17:32:30.334792 kubelet[2504]: E0912 17:32:30.334768 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:30.337180 containerd[1459]: time="2025-09-12T17:32:30.337147017Z" level=info msg="CreateContainer within sandbox \"28134b075ba4a263fb855eca6bb8d698cfd7a5e7140099ba9ffceee4d06d6ed8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:32:30.460512 systemd[1]: Created slice kubepods-besteffort-pod8879f073_6c82_4376_a4cc_a682da7f1dd2.slice - libcontainer container kubepods-besteffort-pod8879f073_6c82_4376_a4cc_a682da7f1dd2.slice. 
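
[Editor's aside] The MountVolume.SetUp failure above is transient bootstrap noise: the projected service-account volume needs the kube-root-ca.crt ConfigMap, which nothing has published yet this early in cluster bring-up. The kubelet parks the operation and retries; the logged durationBeforeRetry 500ms is the first step of an exponential backoff. A sketch of that shape, assuming the usual doubling with a cap (the cap value is an assumption, not taken from the log; the kubelet's nestedpendingoperations package holds the real policy):

    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff sketches the retry delay implied by "durationBeforeRetry
    // 500ms": start at 500ms, double per failure, clamp at a cap.
    func nextBackoff(current time.Duration) time.Duration {
        const (
            initial  = 500 * time.Millisecond
            maxDelay = 2 * time.Minute // assumed cap, not from the log
        )
        if current < initial {
            return initial
        }
        if next := 2 * current; next < maxDelay {
            return next
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for i := 0; i < 5; i++ {
            d = nextBackoff(d)
            fmt.Print(d, " ") // 500ms 1s 2s 4s 8s
        }
        fmt.Println()
    }

Here a single retry suffices: the sandbox for kube-proxy-rm8qw is created at 17:32:29.869, just after the 500ms window expires.
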
Sep 12 17:32:30.469441 containerd[1459]: time="2025-09-12T17:32:30.469396098Z" level=info msg="CreateContainer within sandbox \"28134b075ba4a263fb855eca6bb8d698cfd7a5e7140099ba9ffceee4d06d6ed8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5efeeb2be13174e3f9b779accc8458407fd801215691036f8d90c7991a71400a\"" Sep 12 17:32:30.470091 containerd[1459]: time="2025-09-12T17:32:30.470045213Z" level=info msg="StartContainer for \"5efeeb2be13174e3f9b779accc8458407fd801215691036f8d90c7991a71400a\"" Sep 12 17:32:30.480548 kubelet[2504]: I0912 17:32:30.480419 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8879f073-6c82-4376-a4cc-a682da7f1dd2-var-lib-calico\") pod \"tigera-operator-755d956888-lxc5c\" (UID: \"8879f073-6c82-4376-a4cc-a682da7f1dd2\") " pod="tigera-operator/tigera-operator-755d956888-lxc5c" Sep 12 17:32:30.480548 kubelet[2504]: I0912 17:32:30.480469 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bmzd\" (UniqueName: \"kubernetes.io/projected/8879f073-6c82-4376-a4cc-a682da7f1dd2-kube-api-access-6bmzd\") pod \"tigera-operator-755d956888-lxc5c\" (UID: \"8879f073-6c82-4376-a4cc-a682da7f1dd2\") " pod="tigera-operator/tigera-operator-755d956888-lxc5c" Sep 12 17:32:30.505021 systemd[1]: Started cri-containerd-5efeeb2be13174e3f9b779accc8458407fd801215691036f8d90c7991a71400a.scope - libcontainer container 5efeeb2be13174e3f9b779accc8458407fd801215691036f8d90c7991a71400a. Sep 12 17:32:30.538185 containerd[1459]: time="2025-09-12T17:32:30.538065665Z" level=info msg="StartContainer for \"5efeeb2be13174e3f9b779accc8458407fd801215691036f8d90c7991a71400a\" returns successfully" Sep 12 17:32:30.574050 kubelet[2504]: E0912 17:32:30.574015 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:30.584961 kubelet[2504]: I0912 17:32:30.584345 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rm8qw" podStartSLOduration=2.584327462 podStartE2EDuration="2.584327462s" podCreationTimestamp="2025-09-12 17:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:30.584046641 +0000 UTC m=+7.126000826" watchObservedRunningTime="2025-09-12 17:32:30.584327462 +0000 UTC m=+7.126281647" Sep 12 17:32:30.764331 containerd[1459]: time="2025-09-12T17:32:30.764288913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lxc5c,Uid:8879f073-6c82-4376-a4cc-a682da7f1dd2,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:32:30.791144 containerd[1459]: time="2025-09-12T17:32:30.790709731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:30.791144 containerd[1459]: time="2025-09-12T17:32:30.790770533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:30.791144 containerd[1459]: time="2025-09-12T17:32:30.790784600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:30.791394 containerd[1459]: time="2025-09-12T17:32:30.791036814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:30.818159 systemd[1]: Started cri-containerd-1b3e646b5a0282bf41a455903b3c631d396e504d14f5c4e77403fceeb61c9531.scope - libcontainer container 1b3e646b5a0282bf41a455903b3c631d396e504d14f5c4e77403fceeb61c9531. Sep 12 17:32:30.861052 containerd[1459]: time="2025-09-12T17:32:30.861005684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lxc5c,Uid:8879f073-6c82-4376-a4cc-a682da7f1dd2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b3e646b5a0282bf41a455903b3c631d396e504d14f5c4e77403fceeb61c9531\"" Sep 12 17:32:30.863025 containerd[1459]: time="2025-09-12T17:32:30.862980434Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:32:31.705058 update_engine[1445]: I20250912 17:32:31.704975 1445 update_attempter.cc:509] Updating boot flags... Sep 12 17:32:31.769212 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2809) Sep 12 17:32:31.815898 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2811) Sep 12 17:32:32.203129 kubelet[2504]: E0912 17:32:32.203087 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:32.578314 kubelet[2504]: E0912 17:32:32.578266 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:32.982422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710679599.mount: Deactivated successfully. 
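
[Editor's aside] The recurring dns.go:153 warnings (two more just above) mean the node's resolv.conf carries more nameserver entries than the resolver limit of three, so the kubelet truncates the list and applies only the first three when building a pod's resolv.conf. A sketch of the trimming; the fourth host entry below is hypothetical, since the log only shows the applied line:

    package main

    import (
        "fmt"
        "strings"
    )

    // applyNameservers keeps at most three entries, the glibc MAXNS
    // limit the kubelet enforces for pod resolv.conf files.
    func applyNameservers(ns []string) []string {
        const maxNameservers = 3
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // First three match the "applied nameserver line" in the log;
        // the fourth is an invented extra that triggers the warning.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(strings.Join(applyNameservers(host), " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
    }
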
Sep 12 17:32:34.282737 kubelet[2504]: E0912 17:32:34.282685 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:36.267378 containerd[1459]: time="2025-09-12T17:32:36.267308535Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:36.268132 containerd[1459]: time="2025-09-12T17:32:36.268064621Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:32:36.269264 containerd[1459]: time="2025-09-12T17:32:36.269223939Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:36.271541 containerd[1459]: time="2025-09-12T17:32:36.271511384Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:36.272106 containerd[1459]: time="2025-09-12T17:32:36.272067266Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.409049137s" Sep 12 17:32:36.272142 containerd[1459]: time="2025-09-12T17:32:36.272111833Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:32:36.274291 containerd[1459]: time="2025-09-12T17:32:36.274173074Z" level=info msg="CreateContainer within sandbox \"1b3e646b5a0282bf41a455903b3c631d396e504d14f5c4e77403fceeb61c9531\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:32:36.290103 containerd[1459]: time="2025-09-12T17:32:36.290063047Z" level=info msg="CreateContainer within sandbox \"1b3e646b5a0282bf41a455903b3c631d396e504d14f5c4e77403fceeb61c9531\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c5d97b76a4cd9367fa4e3dc3e354c77155cd4904001ab3d0c3ed0cb01ddd5f14\"" Sep 12 17:32:36.291436 containerd[1459]: time="2025-09-12T17:32:36.290533571Z" level=info msg="StartContainer for \"c5d97b76a4cd9367fa4e3dc3e354c77155cd4904001ab3d0c3ed0cb01ddd5f14\"" Sep 12 17:32:36.328990 systemd[1]: Started cri-containerd-c5d97b76a4cd9367fa4e3dc3e354c77155cd4904001ab3d0c3ed0cb01ddd5f14.scope - libcontainer container c5d97b76a4cd9367fa4e3dc3e354c77155cd4904001ab3d0c3ed0cb01ddd5f14. 
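
[Editor's aside] For scale, the two figures containerd logged for that pull imply a back-of-the-envelope transfer rate, roughly 25 MB in about 5.4 s:

    package main

    import "fmt"

    func main() {
        const bytesRead = 25062609.0 // "bytes read" from the stop-pulling entry
        const seconds = 5.409049137  // duration from the Pulled entry
        fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈ 4.42 MiB/s
    }
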
Sep 12 17:32:36.358121 containerd[1459]: time="2025-09-12T17:32:36.358071529Z" level=info msg="StartContainer for \"c5d97b76a4cd9367fa4e3dc3e354c77155cd4904001ab3d0c3ed0cb01ddd5f14\" returns successfully" Sep 12 17:32:36.592973 kubelet[2504]: I0912 17:32:36.592902 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-lxc5c" podStartSLOduration=1.182535864 podStartE2EDuration="6.592882527s" podCreationTimestamp="2025-09-12 17:32:30 +0000 UTC" firstStartedPulling="2025-09-12 17:32:30.862459836 +0000 UTC m=+7.404414011" lastFinishedPulling="2025-09-12 17:32:36.272806489 +0000 UTC m=+12.814760674" observedRunningTime="2025-09-12 17:32:36.592595182 +0000 UTC m=+13.134549367" watchObservedRunningTime="2025-09-12 17:32:36.592882527 +0000 UTC m=+13.134836712" Sep 12 17:32:37.702949 kubelet[2504]: E0912 17:32:37.702625 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:38.589901 kubelet[2504]: E0912 17:32:38.589816 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:41.420690 sudo[1640]: pam_unix(sudo:session): session closed for user root Sep 12 17:32:41.425224 sshd[1637]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:41.429863 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:32:41.430619 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:39346.service: Deactivated successfully. Sep 12 17:32:41.433479 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:32:41.433809 systemd[1]: session-7.scope: Consumed 5.230s CPU time, 159.1M memory peak, 0B memory swap peak. Sep 12 17:32:41.435175 systemd-logind[1442]: Removed session 7. Sep 12 17:32:43.819464 systemd[1]: Created slice kubepods-besteffort-podfa8b9487_cc50_4a21_b812_0fae475020d5.slice - libcontainer container kubepods-besteffort-podfa8b9487_cc50_4a21_b812_0fae475020d5.slice. 
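
[Editor's aside] The flood of driver-call.go/plugins.go errors beginning at 17:32:44.552 below is one root cause repeated per volume probe: the kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, Calico's nodeagent~uds driver binary is not installed on this host ("executable file not found in $PATH"), the init call therefore produces empty output, and unmarshalling an empty byte slice as JSON yields exactly the logged "unexpected end of JSON input". A minimal reproduction of that final step (the status struct is invented for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus stands in for the JSON a FlexVolume driver's init
    // call should print; the field is illustrative.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        var st driverStatus
        // The missing uds binary produces no output, so the kubelet
        // ends up decoding an empty byte slice:
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }

Each plugin probe retriggers the same failure, which is why the block below repeats essentially verbatim while the csi-node-driver-x4vh8 volumes continue to be reconciled around it.
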
Sep 12 17:32:43.866843 kubelet[2504]: I0912 17:32:43.866766 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa8b9487-cc50-4a21-b812-0fae475020d5-tigera-ca-bundle\") pod \"calico-typha-cc4898b69-76gl4\" (UID: \"fa8b9487-cc50-4a21-b812-0fae475020d5\") " pod="calico-system/calico-typha-cc4898b69-76gl4" Sep 12 17:32:43.866843 kubelet[2504]: I0912 17:32:43.866829 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fa8b9487-cc50-4a21-b812-0fae475020d5-typha-certs\") pod \"calico-typha-cc4898b69-76gl4\" (UID: \"fa8b9487-cc50-4a21-b812-0fae475020d5\") " pod="calico-system/calico-typha-cc4898b69-76gl4" Sep 12 17:32:43.866843 kubelet[2504]: I0912 17:32:43.866863 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92fvj\" (UniqueName: \"kubernetes.io/projected/fa8b9487-cc50-4a21-b812-0fae475020d5-kube-api-access-92fvj\") pod \"calico-typha-cc4898b69-76gl4\" (UID: \"fa8b9487-cc50-4a21-b812-0fae475020d5\") " pod="calico-system/calico-typha-cc4898b69-76gl4" Sep 12 17:32:44.078319 systemd[1]: Created slice kubepods-besteffort-pod39a5b480_8bdc_49b0_8d7b_28672a29c7ac.slice - libcontainer container kubepods-besteffort-pod39a5b480_8bdc_49b0_8d7b_28672a29c7ac.slice. Sep 12 17:32:44.127871 kubelet[2504]: E0912 17:32:44.127789 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:44.128513 containerd[1459]: time="2025-09-12T17:32:44.128446667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc4898b69-76gl4,Uid:fa8b9487-cc50-4a21-b812-0fae475020d5,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:44.170886 kubelet[2504]: I0912 17:32:44.168063 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-cni-bin-dir\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.170886 kubelet[2504]: I0912 17:32:44.168168 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-cni-log-dir\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.170886 kubelet[2504]: I0912 17:32:44.168207 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-policysync\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.170886 kubelet[2504]: I0912 17:32:44.168233 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-flexvol-driver-host\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.170886 kubelet[2504]: I0912 17:32:44.168262 2504 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-lib-modules\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171164 kubelet[2504]: I0912 17:32:44.168280 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-tigera-ca-bundle\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171164 kubelet[2504]: I0912 17:32:44.168298 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-var-run-calico\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171164 kubelet[2504]: I0912 17:32:44.168317 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4w4\" (UniqueName: \"kubernetes.io/projected/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-kube-api-access-2g4w4\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171164 kubelet[2504]: I0912 17:32:44.168341 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-cni-net-dir\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171164 kubelet[2504]: I0912 17:32:44.168364 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-node-certs\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171411 kubelet[2504]: I0912 17:32:44.168378 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-var-lib-calico\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.171411 kubelet[2504]: I0912 17:32:44.168394 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39a5b480-8bdc-49b0-8d7b-28672a29c7ac-xtables-lock\") pod \"calico-node-5n26d\" (UID: \"39a5b480-8bdc-49b0-8d7b-28672a29c7ac\") " pod="calico-system/calico-node-5n26d" Sep 12 17:32:44.441790 containerd[1459]: time="2025-09-12T17:32:44.441589887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:44.441790 containerd[1459]: time="2025-09-12T17:32:44.441658170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:44.441790 containerd[1459]: time="2025-09-12T17:32:44.441686575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:44.442993 containerd[1459]: time="2025-09-12T17:32:44.442934925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:44.463350 systemd[1]: Started cri-containerd-2cb335c96682853f6a9f54774c03fdb3493d713be01ce3a1183e3640654889f9.scope - libcontainer container 2cb335c96682853f6a9f54774c03fdb3493d713be01ce3a1183e3640654889f9. Sep 12 17:32:44.479483 kubelet[2504]: E0912 17:32:44.477540 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4" Sep 12 17:32:44.511795 containerd[1459]: time="2025-09-12T17:32:44.511617973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc4898b69-76gl4,Uid:fa8b9487-cc50-4a21-b812-0fae475020d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cb335c96682853f6a9f54774c03fdb3493d713be01ce3a1183e3640654889f9\"" Sep 12 17:32:44.512621 kubelet[2504]: E0912 17:32:44.512593 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:44.513696 containerd[1459]: time="2025-09-12T17:32:44.513577723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:32:44.552452 kubelet[2504]: E0912 17:32:44.552414 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.552452 kubelet[2504]: W0912 17:32:44.552436 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.552602 kubelet[2504]: E0912 17:32:44.552479 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.552804 kubelet[2504]: E0912 17:32:44.552785 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.552804 kubelet[2504]: W0912 17:32:44.552796 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.552890 kubelet[2504]: E0912 17:32:44.552805 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.553055 kubelet[2504]: E0912 17:32:44.553042 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.553055 kubelet[2504]: W0912 17:32:44.553052 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.553122 kubelet[2504]: E0912 17:32:44.553061 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.553308 kubelet[2504]: E0912 17:32:44.553294 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.553308 kubelet[2504]: W0912 17:32:44.553304 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.553366 kubelet[2504]: E0912 17:32:44.553312 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.553531 kubelet[2504]: E0912 17:32:44.553512 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.553531 kubelet[2504]: W0912 17:32:44.553522 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.553531 kubelet[2504]: E0912 17:32:44.553530 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.553796 kubelet[2504]: E0912 17:32:44.553779 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.553796 kubelet[2504]: W0912 17:32:44.553789 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.553842 kubelet[2504]: E0912 17:32:44.553798 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.554016 kubelet[2504]: E0912 17:32:44.554004 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.554016 kubelet[2504]: W0912 17:32:44.554014 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.554057 kubelet[2504]: E0912 17:32:44.554022 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.554221 kubelet[2504]: E0912 17:32:44.554209 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.554243 kubelet[2504]: W0912 17:32:44.554220 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.554243 kubelet[2504]: E0912 17:32:44.554229 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.554425 kubelet[2504]: E0912 17:32:44.554413 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.554425 kubelet[2504]: W0912 17:32:44.554423 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.554480 kubelet[2504]: E0912 17:32:44.554430 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.554656 kubelet[2504]: E0912 17:32:44.554644 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.554656 kubelet[2504]: W0912 17:32:44.554653 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.554702 kubelet[2504]: E0912 17:32:44.554661 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.554880 kubelet[2504]: E0912 17:32:44.554867 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.554880 kubelet[2504]: W0912 17:32:44.554877 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.554945 kubelet[2504]: E0912 17:32:44.554886 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.555088 kubelet[2504]: E0912 17:32:44.555075 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.555088 kubelet[2504]: W0912 17:32:44.555085 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.555137 kubelet[2504]: E0912 17:32:44.555092 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.555304 kubelet[2504]: E0912 17:32:44.555293 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.555304 kubelet[2504]: W0912 17:32:44.555302 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.555347 kubelet[2504]: E0912 17:32:44.555310 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.555500 kubelet[2504]: E0912 17:32:44.555489 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.555500 kubelet[2504]: W0912 17:32:44.555498 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.555541 kubelet[2504]: E0912 17:32:44.555505 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.555701 kubelet[2504]: E0912 17:32:44.555687 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.555701 kubelet[2504]: W0912 17:32:44.555696 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.555752 kubelet[2504]: E0912 17:32:44.555704 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.555939 kubelet[2504]: E0912 17:32:44.555926 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.555971 kubelet[2504]: W0912 17:32:44.555943 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.555971 kubelet[2504]: E0912 17:32:44.555958 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.556210 kubelet[2504]: E0912 17:32:44.556195 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.556210 kubelet[2504]: W0912 17:32:44.556206 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.556272 kubelet[2504]: E0912 17:32:44.556214 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.556438 kubelet[2504]: E0912 17:32:44.556426 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.556438 kubelet[2504]: W0912 17:32:44.556436 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.556482 kubelet[2504]: E0912 17:32:44.556444 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.556626 kubelet[2504]: E0912 17:32:44.556613 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.556626 kubelet[2504]: W0912 17:32:44.556622 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.556682 kubelet[2504]: E0912 17:32:44.556629 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.556833 kubelet[2504]: E0912 17:32:44.556821 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.556833 kubelet[2504]: W0912 17:32:44.556830 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.556903 kubelet[2504]: E0912 17:32:44.556837 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.572597 kubelet[2504]: E0912 17:32:44.572549 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.572597 kubelet[2504]: W0912 17:32:44.572589 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.572684 kubelet[2504]: E0912 17:32:44.572621 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.572684 kubelet[2504]: I0912 17:32:44.572662 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ntr\" (UniqueName: \"kubernetes.io/projected/948ee8ec-ca22-47e2-8cb0-925053d685b4-kube-api-access-f2ntr\") pod \"csi-node-driver-x4vh8\" (UID: \"948ee8ec-ca22-47e2-8cb0-925053d685b4\") " pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:44.572995 kubelet[2504]: E0912 17:32:44.572966 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.572995 kubelet[2504]: W0912 17:32:44.572986 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.573052 kubelet[2504]: E0912 17:32:44.573009 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.573052 kubelet[2504]: I0912 17:32:44.573030 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/948ee8ec-ca22-47e2-8cb0-925053d685b4-socket-dir\") pod \"csi-node-driver-x4vh8\" (UID: \"948ee8ec-ca22-47e2-8cb0-925053d685b4\") " pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:44.573386 kubelet[2504]: E0912 17:32:44.573368 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.573386 kubelet[2504]: W0912 17:32:44.573384 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.573451 kubelet[2504]: E0912 17:32:44.573403 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.573712 kubelet[2504]: E0912 17:32:44.573693 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.573712 kubelet[2504]: W0912 17:32:44.573704 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.573754 kubelet[2504]: E0912 17:32:44.573718 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.574084 kubelet[2504]: E0912 17:32:44.574048 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.574119 kubelet[2504]: W0912 17:32:44.574084 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.574141 kubelet[2504]: E0912 17:32:44.574125 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.574183 kubelet[2504]: I0912 17:32:44.574161 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/948ee8ec-ca22-47e2-8cb0-925053d685b4-registration-dir\") pod \"csi-node-driver-x4vh8\" (UID: \"948ee8ec-ca22-47e2-8cb0-925053d685b4\") " pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:44.574464 kubelet[2504]: E0912 17:32:44.574441 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.574498 kubelet[2504]: W0912 17:32:44.574463 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.574498 kubelet[2504]: E0912 17:32:44.574485 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.574746 kubelet[2504]: E0912 17:32:44.574729 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.574746 kubelet[2504]: W0912 17:32:44.574743 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.574798 kubelet[2504]: E0912 17:32:44.574766 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.575044 kubelet[2504]: E0912 17:32:44.575016 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.575044 kubelet[2504]: W0912 17:32:44.575035 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.575114 kubelet[2504]: E0912 17:32:44.575051 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.575114 kubelet[2504]: I0912 17:32:44.575069 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/948ee8ec-ca22-47e2-8cb0-925053d685b4-kubelet-dir\") pod \"csi-node-driver-x4vh8\" (UID: \"948ee8ec-ca22-47e2-8cb0-925053d685b4\") " pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:44.575304 kubelet[2504]: E0912 17:32:44.575285 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.575304 kubelet[2504]: W0912 17:32:44.575298 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.575379 kubelet[2504]: E0912 17:32:44.575311 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.575379 kubelet[2504]: I0912 17:32:44.575325 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/948ee8ec-ca22-47e2-8cb0-925053d685b4-varrun\") pod \"csi-node-driver-x4vh8\" (UID: \"948ee8ec-ca22-47e2-8cb0-925053d685b4\") " pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:44.575583 kubelet[2504]: E0912 17:32:44.575567 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.575583 kubelet[2504]: W0912 17:32:44.575578 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.575645 kubelet[2504]: E0912 17:32:44.575615 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.575791 kubelet[2504]: E0912 17:32:44.575777 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.575791 kubelet[2504]: W0912 17:32:44.575788 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.575841 kubelet[2504]: E0912 17:32:44.575823 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.576032 kubelet[2504]: E0912 17:32:44.576017 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.576032 kubelet[2504]: W0912 17:32:44.576028 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.576074 kubelet[2504]: E0912 17:32:44.576044 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.576250 kubelet[2504]: E0912 17:32:44.576235 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.576250 kubelet[2504]: W0912 17:32:44.576246 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.576306 kubelet[2504]: E0912 17:32:44.576254 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.576484 kubelet[2504]: E0912 17:32:44.576467 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.576484 kubelet[2504]: W0912 17:32:44.576478 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.576484 kubelet[2504]: E0912 17:32:44.576486 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.576690 kubelet[2504]: E0912 17:32:44.576675 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.576690 kubelet[2504]: W0912 17:32:44.576686 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.576751 kubelet[2504]: E0912 17:32:44.576694 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.676666 kubelet[2504]: E0912 17:32:44.676624 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.676666 kubelet[2504]: W0912 17:32:44.676650 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.676666 kubelet[2504]: E0912 17:32:44.676675 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.676988 kubelet[2504]: E0912 17:32:44.676962 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.676988 kubelet[2504]: W0912 17:32:44.676975 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.677049 kubelet[2504]: E0912 17:32:44.676991 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.677223 kubelet[2504]: E0912 17:32:44.677205 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.677223 kubelet[2504]: W0912 17:32:44.677218 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.677277 kubelet[2504]: E0912 17:32:44.677232 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:44.677457 kubelet[2504]: E0912 17:32:44.677436 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.677457 kubelet[2504]: W0912 17:32:44.677449 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.677540 kubelet[2504]: E0912 17:32:44.677464 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.677880 kubelet[2504]: E0912 17:32:44.677822 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.677880 kubelet[2504]: W0912 17:32:44.677868 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.677925 kubelet[2504]: E0912 17:32:44.677900 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.678129 kubelet[2504]: E0912 17:32:44.678111 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.678129 kubelet[2504]: W0912 17:32:44.678122 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.678173 kubelet[2504]: E0912 17:32:44.678135 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.678333 kubelet[2504]: E0912 17:32:44.678319 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.678333 kubelet[2504]: W0912 17:32:44.678328 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.678395 kubelet[2504]: E0912 17:32:44.678356 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:44.678541 kubelet[2504]: E0912 17:32:44.678520 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:44.678541 kubelet[2504]: W0912 17:32:44.678530 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:44.678614 kubelet[2504]: E0912 17:32:44.678556 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:32:44.681933 containerd[1459]: time="2025-09-12T17:32:44.681896015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5n26d,Uid:39a5b480-8bdc-49b0-8d7b-28672a29c7ac,Namespace:calico-system,Attempt:0,}"
Sep 12 17:32:44.711751 containerd[1459]: time="2025-09-12T17:32:44.711515918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:32:44.711751 containerd[1459]: time="2025-09-12T17:32:44.711601164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:32:44.711751 containerd[1459]: time="2025-09-12T17:32:44.711616023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:32:44.711979 containerd[1459]: time="2025-09-12T17:32:44.711709194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:32:44.737021 systemd[1]: Started cri-containerd-0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37.scope - libcontainer container 0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37.
Sep 12 17:32:44.761964 containerd[1459]: time="2025-09-12T17:32:44.761897540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5n26d,Uid:39a5b480-8bdc-49b0-8d7b-28672a29c7ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\""
Sep 12 17:32:46.552532 kubelet[2504]: E0912 17:32:46.552465 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4"
Sep 12 17:32:46.565382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836992750.mount: Deactivated successfully.
Sep 12 17:32:47.583891 containerd[1459]: time="2025-09-12T17:32:47.583820662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:47.584642 containerd[1459]: time="2025-09-12T17:32:47.584556894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 12 17:32:47.585779 containerd[1459]: time="2025-09-12T17:32:47.585737575Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:47.588751 containerd[1459]: time="2025-09-12T17:32:47.588649943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:47.589208 containerd[1459]: time="2025-09-12T17:32:47.589154048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.075531748s"
Sep 12 17:32:47.589208 containerd[1459]: time="2025-09-12T17:32:47.589198092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 17:32:47.597103 containerd[1459]: time="2025-09-12T17:32:47.597055375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 17:32:47.622094 containerd[1459]: time="2025-09-12T17:32:47.622032327Z" level=info msg="CreateContainer within sandbox \"2cb335c96682853f6a9f54774c03fdb3493d713be01ce3a1183e3640654889f9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 17:32:47.638273 containerd[1459]: time="2025-09-12T17:32:47.638168687Z" level=info msg="CreateContainer within sandbox \"2cb335c96682853f6a9f54774c03fdb3493d713be01ce3a1183e3640654889f9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746\""
Sep 12 17:32:47.641942 containerd[1459]: time="2025-09-12T17:32:47.641891600Z" level=info msg="StartContainer for \"d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746\""
time="2025-09-12T17:32:47.641891600Z" level=info msg="StartContainer for \"d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746\"" Sep 12 17:32:47.677074 systemd[1]: Started cri-containerd-d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746.scope - libcontainer container d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746. Sep 12 17:32:47.726818 containerd[1459]: time="2025-09-12T17:32:47.726749739Z" level=info msg="StartContainer for \"d508c9b9f5be9af18f99b804f657d4d61c959417944fee9e134360475cc02746\" returns successfully" Sep 12 17:32:48.556313 kubelet[2504]: E0912 17:32:48.556257 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4" Sep 12 17:32:48.622198 kubelet[2504]: E0912 17:32:48.622145 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:48.636654 kubelet[2504]: I0912 17:32:48.636583 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cc4898b69-76gl4" podStartSLOduration=2.5523156179999997 podStartE2EDuration="5.635727696s" podCreationTimestamp="2025-09-12 17:32:43 +0000 UTC" firstStartedPulling="2025-09-12 17:32:44.513247111 +0000 UTC m=+21.055201296" lastFinishedPulling="2025-09-12 17:32:47.596659189 +0000 UTC m=+24.138613374" observedRunningTime="2025-09-12 17:32:48.634660085 +0000 UTC m=+25.176614280" watchObservedRunningTime="2025-09-12 17:32:48.635727696 +0000 UTC m=+25.177681881" Sep 12 17:32:48.678323 kubelet[2504]: E0912 17:32:48.678274 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.678323 kubelet[2504]: W0912 17:32:48.678300 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.678955 kubelet[2504]: E0912 17:32:48.678933 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.679297 kubelet[2504]: E0912 17:32:48.679277 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.679297 kubelet[2504]: W0912 17:32:48.679288 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.679297 kubelet[2504]: E0912 17:32:48.679297 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.679524 kubelet[2504]: E0912 17:32:48.679506 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.679524 kubelet[2504]: W0912 17:32:48.679515 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.679590 kubelet[2504]: E0912 17:32:48.679527 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.679799 kubelet[2504]: E0912 17:32:48.679770 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.679799 kubelet[2504]: W0912 17:32:48.679780 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.679799 kubelet[2504]: E0912 17:32:48.679788 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.680074 kubelet[2504]: E0912 17:32:48.680058 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.680074 kubelet[2504]: W0912 17:32:48.680070 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.680141 kubelet[2504]: E0912 17:32:48.680080 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.680291 kubelet[2504]: E0912 17:32:48.680276 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.680291 kubelet[2504]: W0912 17:32:48.680286 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.680358 kubelet[2504]: E0912 17:32:48.680295 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.680511 kubelet[2504]: E0912 17:32:48.680496 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.680511 kubelet[2504]: W0912 17:32:48.680507 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.680598 kubelet[2504]: E0912 17:32:48.680515 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.680750 kubelet[2504]: E0912 17:32:48.680714 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.680750 kubelet[2504]: W0912 17:32:48.680731 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.680750 kubelet[2504]: E0912 17:32:48.680745 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.681050 kubelet[2504]: E0912 17:32:48.681031 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.681050 kubelet[2504]: W0912 17:32:48.681049 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.681103 kubelet[2504]: E0912 17:32:48.681063 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.681316 kubelet[2504]: E0912 17:32:48.681297 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.681316 kubelet[2504]: W0912 17:32:48.681312 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.681376 kubelet[2504]: E0912 17:32:48.681325 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.681564 kubelet[2504]: E0912 17:32:48.681546 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.681564 kubelet[2504]: W0912 17:32:48.681561 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.681624 kubelet[2504]: E0912 17:32:48.681573 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.681828 kubelet[2504]: E0912 17:32:48.681808 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.681828 kubelet[2504]: W0912 17:32:48.681825 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.681912 kubelet[2504]: E0912 17:32:48.681837 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.682137 kubelet[2504]: E0912 17:32:48.682117 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.682137 kubelet[2504]: W0912 17:32:48.682134 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.682194 kubelet[2504]: E0912 17:32:48.682146 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.682359 kubelet[2504]: E0912 17:32:48.682342 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.682359 kubelet[2504]: W0912 17:32:48.682356 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.682412 kubelet[2504]: E0912 17:32:48.682367 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.682579 kubelet[2504]: E0912 17:32:48.682562 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.682579 kubelet[2504]: W0912 17:32:48.682576 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.682631 kubelet[2504]: E0912 17:32:48.682586 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.706741 kubelet[2504]: E0912 17:32:48.706678 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.706741 kubelet[2504]: W0912 17:32:48.706710 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.706741 kubelet[2504]: E0912 17:32:48.706737 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.707118 kubelet[2504]: E0912 17:32:48.707085 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.707118 kubelet[2504]: W0912 17:32:48.707105 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.707213 kubelet[2504]: E0912 17:32:48.707125 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.707490 kubelet[2504]: E0912 17:32:48.707453 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.707490 kubelet[2504]: W0912 17:32:48.707474 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.707569 kubelet[2504]: E0912 17:32:48.707495 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.707838 kubelet[2504]: E0912 17:32:48.707816 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.707838 kubelet[2504]: W0912 17:32:48.707831 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.707929 kubelet[2504]: E0912 17:32:48.707870 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.708169 kubelet[2504]: E0912 17:32:48.708152 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.708169 kubelet[2504]: W0912 17:32:48.708165 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.708255 kubelet[2504]: E0912 17:32:48.708228 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.708405 kubelet[2504]: E0912 17:32:48.708389 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.708405 kubelet[2504]: W0912 17:32:48.708402 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.708470 kubelet[2504]: E0912 17:32:48.708450 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.708610 kubelet[2504]: E0912 17:32:48.708595 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.708610 kubelet[2504]: W0912 17:32:48.708607 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.708672 kubelet[2504]: E0912 17:32:48.708641 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.708826 kubelet[2504]: E0912 17:32:48.708812 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.708826 kubelet[2504]: W0912 17:32:48.708823 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.708880 kubelet[2504]: E0912 17:32:48.708838 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.709154 kubelet[2504]: E0912 17:32:48.709136 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.709154 kubelet[2504]: W0912 17:32:48.709151 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.709218 kubelet[2504]: E0912 17:32:48.709169 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.709594 kubelet[2504]: E0912 17:32:48.709565 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.709594 kubelet[2504]: W0912 17:32:48.709584 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.709672 kubelet[2504]: E0912 17:32:48.709602 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.710007 kubelet[2504]: E0912 17:32:48.709958 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.710007 kubelet[2504]: W0912 17:32:48.709982 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.710007 kubelet[2504]: E0912 17:32:48.710016 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.710282 kubelet[2504]: E0912 17:32:48.710264 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.710282 kubelet[2504]: W0912 17:32:48.710279 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.710329 kubelet[2504]: E0912 17:32:48.710297 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.710590 kubelet[2504]: E0912 17:32:48.710554 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.710590 kubelet[2504]: W0912 17:32:48.710573 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.710683 kubelet[2504]: E0912 17:32:48.710593 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.710932 kubelet[2504]: E0912 17:32:48.710912 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.710932 kubelet[2504]: W0912 17:32:48.710929 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.711029 kubelet[2504]: E0912 17:32:48.710951 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.711294 kubelet[2504]: E0912 17:32:48.711272 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.711294 kubelet[2504]: W0912 17:32:48.711290 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.711367 kubelet[2504]: E0912 17:32:48.711310 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.711884 kubelet[2504]: E0912 17:32:48.711866 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.711884 kubelet[2504]: W0912 17:32:48.711878 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.711969 kubelet[2504]: E0912 17:32:48.711893 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:48.712184 kubelet[2504]: E0912 17:32:48.712168 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.712184 kubelet[2504]: W0912 17:32:48.712179 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.712245 kubelet[2504]: E0912 17:32:48.712188 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:32:48.712588 kubelet[2504]: E0912 17:32:48.712564 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:32:48.712588 kubelet[2504]: W0912 17:32:48.712576 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:32:48.712588 kubelet[2504]: E0912 17:32:48.712584 2504 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:32:49.360572 containerd[1459]: time="2025-09-12T17:32:49.360500294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:49.362185 containerd[1459]: time="2025-09-12T17:32:49.362110619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:32:49.363923 containerd[1459]: time="2025-09-12T17:32:49.363892555Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:49.367124 containerd[1459]: time="2025-09-12T17:32:49.367058902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:49.368054 containerd[1459]: time="2025-09-12T17:32:49.367895726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.770785135s" Sep 12 17:32:49.368054 containerd[1459]: time="2025-09-12T17:32:49.367952395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:32:49.370567 containerd[1459]: time="2025-09-12T17:32:49.370447817Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:32:49.410143 containerd[1459]: time="2025-09-12T17:32:49.410076509Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43\"" Sep 12 17:32:49.412722 containerd[1459]: time="2025-09-12T17:32:49.412645223Z" level=info msg="StartContainer for \"d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43\"" Sep 12 17:32:49.448022 systemd[1]: Started cri-containerd-d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43.scope - libcontainer container d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43. 
Sep 12 17:32:49.484667 containerd[1459]: time="2025-09-12T17:32:49.484592118Z" level=info msg="StartContainer for \"d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43\" returns successfully"
Sep 12 17:32:49.504286 systemd[1]: cri-containerd-d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43.scope: Deactivated successfully.
Sep 12 17:32:49.612404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43-rootfs.mount: Deactivated successfully.
Sep 12 17:32:49.915357 kubelet[2504]: I0912 17:32:49.914898 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:32:49.915357 kubelet[2504]: E0912 17:32:49.915227 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:32:49.932470 containerd[1459]: time="2025-09-12T17:32:49.931896279Z" level=info msg="shim disconnected" id=d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43 namespace=k8s.io
Sep 12 17:32:49.932470 containerd[1459]: time="2025-09-12T17:32:49.931987295Z" level=warning msg="cleaning up after shim disconnected" id=d62f5c8adf34db7bc62e3f2b7acfdd9b7c3a474d69c7ac2f53813f63ed631c43 namespace=k8s.io
Sep 12 17:32:49.932470 containerd[1459]: time="2025-09-12T17:32:49.932001952Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:32:50.551748 kubelet[2504]: E0912 17:32:50.551691 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4"
Sep 12 17:32:50.632315 containerd[1459]: time="2025-09-12T17:32:50.632265063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:32:52.552028 kubelet[2504]: E0912 17:32:52.551945 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4"
Sep 12 17:32:54.552351 kubelet[2504]: E0912 17:32:54.552278 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4"
Sep 12 17:32:55.338653 containerd[1459]: time="2025-09-12T17:32:55.338586864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:55.339417 containerd[1459]: time="2025-09-12T17:32:55.339372382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:32:55.340557 containerd[1459]: time="2025-09-12T17:32:55.340507041Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:55.342689 containerd[1459]: time="2025-09-12T17:32:55.342645655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:32:55.343442 containerd[1459]: time="2025-09-12T17:32:55.343398110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.711088861s"
Sep 12 17:32:55.343442 containerd[1459]: time="2025-09-12T17:32:55.343440021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:32:55.357740 containerd[1459]: time="2025-09-12T17:32:55.357695925Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:32:55.392651 containerd[1459]: time="2025-09-12T17:32:55.392584434Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516\""
Sep 12 17:32:55.393292 containerd[1459]: time="2025-09-12T17:32:55.393264239Z" level=info msg="StartContainer for \"c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516\""
Sep 12 17:32:55.428035 systemd[1]: Started cri-containerd-c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516.scope - libcontainer container c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516.
Sep 12 17:32:55.462611 containerd[1459]: time="2025-09-12T17:32:55.462542030Z" level=info msg="StartContainer for \"c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516\" returns successfully"
Sep 12 17:32:56.552643 kubelet[2504]: E0912 17:32:56.552560 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4"
Sep 12 17:32:56.785751 containerd[1459]: time="2025-09-12T17:32:56.785689670Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:32:56.789456 systemd[1]: cri-containerd-c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516.scope: Deactivated successfully.
Sep 12 17:32:56.810761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516-rootfs.mount: Deactivated successfully.
Sep 12 17:32:56.849257 kubelet[2504]: I0912 17:32:56.849210 2504 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:32:57.031520 systemd[1]: Created slice kubepods-burstable-podc771cb80_b69d_4703_831e_b3564032fbcd.slice - libcontainer container kubepods-burstable-podc771cb80_b69d_4703_831e_b3564032fbcd.slice.
Sep 12 17:32:57.036567 kubelet[2504]: I0912 17:32:57.036513 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9aeaba7e-9c59-470f-82df-9b9fbf5dd0db-calico-apiserver-certs\") pod \"calico-apiserver-6778dc6895-fdvxx\" (UID: \"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db\") " pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx"
Sep 12 17:32:57.036567 kubelet[2504]: I0912 17:32:57.036551 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-backend-key-pair\") pod \"whisker-78cb64f49f-nhnl8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " pod="calico-system/whisker-78cb64f49f-nhnl8"
Sep 12 17:32:57.036567 kubelet[2504]: I0912 17:32:57.036571 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2bc8466e-fde1-48f6-8cd4-b09a878f3fd7-calico-apiserver-certs\") pod \"calico-apiserver-6778dc6895-shqls\" (UID: \"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7\") " pod="calico-apiserver/calico-apiserver-6778dc6895-shqls"
Sep 12 17:32:57.036847 kubelet[2504]: I0912 17:32:57.036588 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lft6j\" (UniqueName: \"kubernetes.io/projected/2bc8466e-fde1-48f6-8cd4-b09a878f3fd7-kube-api-access-lft6j\") pod \"calico-apiserver-6778dc6895-shqls\" (UID: \"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7\") " pod="calico-apiserver/calico-apiserver-6778dc6895-shqls"
Sep 12 17:32:57.036847 kubelet[2504]: I0912 17:32:57.036611 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-ca-bundle\") pod \"whisker-78cb64f49f-nhnl8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " pod="calico-system/whisker-78cb64f49f-nhnl8"
Sep 12 17:32:57.036847 kubelet[2504]: I0912 17:32:57.036657 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkn9d\" (UniqueName: \"kubernetes.io/projected/9139155a-95e6-4e1d-8bf7-6c551b8fbb1a-kube-api-access-hkn9d\") pod \"coredns-668d6bf9bc-bbp9v\" (UID: \"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a\") " pod="kube-system/coredns-668d6bf9bc-bbp9v"
Sep 12 17:32:57.036847 kubelet[2504]: I0912 17:32:57.036696 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjvrj\" (UniqueName: \"kubernetes.io/projected/9aeaba7e-9c59-470f-82df-9b9fbf5dd0db-kube-api-access-fjvrj\") pod \"calico-apiserver-6778dc6895-fdvxx\" (UID: \"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db\") " pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx"
Sep 12 17:32:57.036847 kubelet[2504]: I0912 17:32:57.036716 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c771cb80-b69d-4703-831e-b3564032fbcd-config-volume\") pod \"coredns-668d6bf9bc-hlnxj\" (UID: \"c771cb80-b69d-4703-831e-b3564032fbcd\") " pod="kube-system/coredns-668d6bf9bc-hlnxj"
Sep 12 17:32:57.037040 kubelet[2504]: I0912 17:32:57.036734 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9139155a-95e6-4e1d-8bf7-6c551b8fbb1a-config-volume\") pod \"coredns-668d6bf9bc-bbp9v\" (UID: \"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a\") " pod="kube-system/coredns-668d6bf9bc-bbp9v"
Sep 12 17:32:57.037040 kubelet[2504]: I0912 17:32:57.036781 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pbg\" (UniqueName: \"kubernetes.io/projected/7551bc19-475c-4a24-99ba-aad4f3dfccb8-kube-api-access-t5pbg\") pod \"whisker-78cb64f49f-nhnl8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " pod="calico-system/whisker-78cb64f49f-nhnl8"
Sep 12 17:32:57.037040 kubelet[2504]: I0912 17:32:57.036826 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskws\" (UniqueName: \"kubernetes.io/projected/c771cb80-b69d-4703-831e-b3564032fbcd-kube-api-access-jskws\") pod \"coredns-668d6bf9bc-hlnxj\" (UID: \"c771cb80-b69d-4703-831e-b3564032fbcd\") " pod="kube-system/coredns-668d6bf9bc-hlnxj"
Sep 12 17:32:57.038002 systemd[1]: Created slice kubepods-besteffort-poda81211fa_8f2f_44d0_8a0d_86148fe788e8.slice - libcontainer container kubepods-besteffort-poda81211fa_8f2f_44d0_8a0d_86148fe788e8.slice.
Sep 12 17:32:57.042114 systemd[1]: Created slice kubepods-burstable-pod9139155a_95e6_4e1d_8bf7_6c551b8fbb1a.slice - libcontainer container kubepods-burstable-pod9139155a_95e6_4e1d_8bf7_6c551b8fbb1a.slice.
Sep 12 17:32:57.046780 systemd[1]: Created slice kubepods-besteffort-pod9aeaba7e_9c59_470f_82df_9b9fbf5dd0db.slice - libcontainer container kubepods-besteffort-pod9aeaba7e_9c59_470f_82df_9b9fbf5dd0db.slice.
Sep 12 17:32:57.052387 systemd[1]: Created slice kubepods-besteffort-pod52776af4_cb40_450c_a1f6_9e31fbaa89d8.slice - libcontainer container kubepods-besteffort-pod52776af4_cb40_450c_a1f6_9e31fbaa89d8.slice.
Sep 12 17:32:57.055758 systemd[1]: Created slice kubepods-besteffort-pod7551bc19_475c_4a24_99ba_aad4f3dfccb8.slice - libcontainer container kubepods-besteffort-pod7551bc19_475c_4a24_99ba_aad4f3dfccb8.slice.
Sep 12 17:32:57.060282 systemd[1]: Created slice kubepods-besteffort-pod2bc8466e_fde1_48f6_8cd4_b09a878f3fd7.slice - libcontainer container kubepods-besteffort-pod2bc8466e_fde1_48f6_8cd4_b09a878f3fd7.slice.
Sep 12 17:32:57.081821 containerd[1459]: time="2025-09-12T17:32:57.081627203Z" level=info msg="shim disconnected" id=c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516 namespace=k8s.io
Sep 12 17:32:57.081821 containerd[1459]: time="2025-09-12T17:32:57.081694682Z" level=warning msg="cleaning up after shim disconnected" id=c2df79c12a4895b564cf47cc5bde011706ad37af0dc6c3015f7fa6a943162516 namespace=k8s.io
Sep 12 17:32:57.081821 containerd[1459]: time="2025-09-12T17:32:57.081703930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:32:57.139650 kubelet[2504]: I0912 17:32:57.139490 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52776af4-cb40-450c-a1f6-9e31fbaa89d8-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-95vvq\" (UID: \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\") " pod="calico-system/goldmane-54d579b49d-95vvq"
Sep 12 17:32:57.139894 kubelet[2504]: I0912 17:32:57.139727 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52776af4-cb40-450c-a1f6-9e31fbaa89d8-config\") pod \"goldmane-54d579b49d-95vvq\" (UID: \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\") " pod="calico-system/goldmane-54d579b49d-95vvq"
Sep 12 17:32:57.139894 kubelet[2504]: I0912 17:32:57.139809 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/52776af4-cb40-450c-a1f6-9e31fbaa89d8-goldmane-key-pair\") pod \"goldmane-54d579b49d-95vvq\" (UID: \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\") " pod="calico-system/goldmane-54d579b49d-95vvq"
Sep 12 17:32:57.139999 kubelet[2504]: I0912 17:32:57.139893 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a81211fa-8f2f-44d0-8a0d-86148fe788e8-tigera-ca-bundle\") pod \"calico-kube-controllers-8b99944df-4v9pg\" (UID: \"a81211fa-8f2f-44d0-8a0d-86148fe788e8\") " pod="calico-system/calico-kube-controllers-8b99944df-4v9pg"
Sep 12 17:32:57.139999 kubelet[2504]: I0912 17:32:57.139959 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9f5x\" (UniqueName: \"kubernetes.io/projected/52776af4-cb40-450c-a1f6-9e31fbaa89d8-kube-api-access-m9f5x\") pod \"goldmane-54d579b49d-95vvq\" (UID: \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\") " pod="calico-system/goldmane-54d579b49d-95vvq"
Sep 12 17:32:57.139999 kubelet[2504]: I0912 17:32:57.139977 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fw9k\" (UniqueName: \"kubernetes.io/projected/a81211fa-8f2f-44d0-8a0d-86148fe788e8-kube-api-access-8fw9k\") pod \"calico-kube-controllers-8b99944df-4v9pg\" (UID: \"a81211fa-8f2f-44d0-8a0d-86148fe788e8\") " pod="calico-system/calico-kube-controllers-8b99944df-4v9pg"
Sep 12 17:32:57.334282 kubelet[2504]: E0912 17:32:57.334089 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:32:57.335384 containerd[1459]: time="2025-09-12T17:32:57.335335734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlnxj,Uid:c771cb80-b69d-4703-831e-b3564032fbcd,Namespace:kube-system,Attempt:0,}"
Sep 12 17:32:57.341169 containerd[1459]: time="2025-09-12T17:32:57.341130373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b99944df-4v9pg,Uid:a81211fa-8f2f-44d0-8a0d-86148fe788e8,Namespace:calico-system,Attempt:0,}"
Sep 12 17:32:57.344976 kubelet[2504]: E0912 17:32:57.344940 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:32:57.345246 containerd[1459]: time="2025-09-12T17:32:57.345221056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbp9v,Uid:9139155a-95e6-4e1d-8bf7-6c551b8fbb1a,Namespace:kube-system,Attempt:0,}"
Sep 12 17:32:57.349840 containerd[1459]: time="2025-09-12T17:32:57.349797791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-fdvxx,Uid:9aeaba7e-9c59-470f-82df-9b9fbf5dd0db,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:32:57.355476 containerd[1459]: time="2025-09-12T17:32:57.355406845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-95vvq,Uid:52776af4-cb40-450c-a1f6-9e31fbaa89d8,Namespace:calico-system,Attempt:0,}"
Sep 12 17:32:57.358463 containerd[1459]: time="2025-09-12T17:32:57.358434720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cb64f49f-nhnl8,Uid:7551bc19-475c-4a24-99ba-aad4f3dfccb8,Namespace:calico-system,Attempt:0,}"
Sep 12 17:32:57.363838 containerd[1459]: time="2025-09-12T17:32:57.363802732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-shqls,Uid:2bc8466e-fde1-48f6-8cd4-b09a878f3fd7,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:32:57.449681 containerd[1459]: time="2025-09-12T17:32:57.449606308Z" level=error msg="Failed to destroy network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:32:57.450168 containerd[1459]: time="2025-09-12T17:32:57.450117669Z" level=error msg="encountered an error cleaning up failed sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:32:57.450384 containerd[1459]: time="2025-09-12T17:32:57.450183876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b99944df-4v9pg,Uid:a81211fa-8f2f-44d0-8a0d-86148fe788e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:32:57.454582 containerd[1459]: time="2025-09-12T17:32:57.454529337Z" level=error msg="Failed to destroy network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:32:57.454882 containerd[1459]: time="2025-09-12T17:32:57.454841606Z" level=error msg="encountered an error cleaning up failed sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
time="2025-09-12T17:32:57.454841606Z" level=error msg="encountered an error cleaning up failed sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.454940 containerd[1459]: time="2025-09-12T17:32:57.454897192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlnxj,Uid:c771cb80-b69d-4703-831e-b3564032fbcd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.459250 kubelet[2504]: E0912 17:32:57.459178 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.459346 kubelet[2504]: E0912 17:32:57.459230 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.459346 kubelet[2504]: E0912 17:32:57.459290 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hlnxj" Sep 12 17:32:57.459346 kubelet[2504]: E0912 17:32:57.459325 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hlnxj" Sep 12 17:32:57.459451 kubelet[2504]: E0912 17:32:57.459396 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hlnxj_kube-system(c771cb80-b69d-4703-831e-b3564032fbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hlnxj_kube-system(c771cb80-b69d-4703-831e-b3564032fbcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-hlnxj" podUID="c771cb80-b69d-4703-831e-b3564032fbcd" Sep 12 17:32:57.459451 kubelet[2504]: E0912 17:32:57.459324 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b99944df-4v9pg" Sep 12 17:32:57.459564 kubelet[2504]: E0912 17:32:57.459447 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b99944df-4v9pg" Sep 12 17:32:57.459597 kubelet[2504]: E0912 17:32:57.459568 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b99944df-4v9pg_calico-system(a81211fa-8f2f-44d0-8a0d-86148fe788e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b99944df-4v9pg_calico-system(a81211fa-8f2f-44d0-8a0d-86148fe788e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b99944df-4v9pg" podUID="a81211fa-8f2f-44d0-8a0d-86148fe788e8" Sep 12 17:32:57.648158 containerd[1459]: time="2025-09-12T17:32:57.647993013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:32:57.649557 kubelet[2504]: I0912 17:32:57.649522 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:32:57.655155 kubelet[2504]: I0912 17:32:57.654402 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:32:57.681403 containerd[1459]: time="2025-09-12T17:32:57.681351289Z" level=info msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" Sep 12 17:32:57.688121 containerd[1459]: time="2025-09-12T17:32:57.688067396Z" level=info msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" Sep 12 17:32:57.691530 containerd[1459]: time="2025-09-12T17:32:57.691489146Z" level=info msg="Ensure that sandbox 8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea in task-service has been cleanup successfully" Sep 12 17:32:57.694338 containerd[1459]: time="2025-09-12T17:32:57.694309554Z" level=info msg="Ensure that sandbox 6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5 in task-service has been cleanup successfully" Sep 12 17:32:57.725972 containerd[1459]: time="2025-09-12T17:32:57.725906473Z" level=error msg="Failed to destroy network for sandbox 
\"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.726355 containerd[1459]: time="2025-09-12T17:32:57.726328943Z" level=error msg="encountered an error cleaning up failed sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.726408 containerd[1459]: time="2025-09-12T17:32:57.726386823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbp9v,Uid:9139155a-95e6-4e1d-8bf7-6c551b8fbb1a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.727164 kubelet[2504]: E0912 17:32:57.727116 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.727246 kubelet[2504]: E0912 17:32:57.727217 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bbp9v" Sep 12 17:32:57.727281 kubelet[2504]: E0912 17:32:57.727251 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bbp9v" Sep 12 17:32:57.727342 kubelet[2504]: E0912 17:32:57.727307 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bbp9v_kube-system(9139155a-95e6-4e1d-8bf7-6c551b8fbb1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bbp9v_kube-system(9139155a-95e6-4e1d-8bf7-6c551b8fbb1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bbp9v" podUID="9139155a-95e6-4e1d-8bf7-6c551b8fbb1a" Sep 12 17:32:57.739594 containerd[1459]: time="2025-09-12T17:32:57.739521596Z" 
level=error msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" failed" error="failed to destroy network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.739900 kubelet[2504]: E0912 17:32:57.739830 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:32:57.739986 kubelet[2504]: E0912 17:32:57.739933 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea"} Sep 12 17:32:57.740029 kubelet[2504]: E0912 17:32:57.740005 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c771cb80-b69d-4703-831e-b3564032fbcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:57.740098 kubelet[2504]: E0912 17:32:57.740032 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c771cb80-b69d-4703-831e-b3564032fbcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hlnxj" podUID="c771cb80-b69d-4703-831e-b3564032fbcd" Sep 12 17:32:57.740604 containerd[1459]: time="2025-09-12T17:32:57.740427884Z" level=error msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" failed" error="failed to destroy network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.740643 kubelet[2504]: E0912 17:32:57.740585 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:32:57.740643 kubelet[2504]: E0912 17:32:57.740611 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5"} Sep 12 17:32:57.740643 kubelet[2504]: E0912 17:32:57.740632 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a81211fa-8f2f-44d0-8a0d-86148fe788e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:57.740735 kubelet[2504]: E0912 17:32:57.740649 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a81211fa-8f2f-44d0-8a0d-86148fe788e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b99944df-4v9pg" podUID="a81211fa-8f2f-44d0-8a0d-86148fe788e8" Sep 12 17:32:57.806890 containerd[1459]: time="2025-09-12T17:32:57.806814813Z" level=error msg="Failed to destroy network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.807346 containerd[1459]: time="2025-09-12T17:32:57.807281428Z" level=error msg="encountered an error cleaning up failed sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.807346 containerd[1459]: time="2025-09-12T17:32:57.807334900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-fdvxx,Uid:9aeaba7e-9c59-470f-82df-9b9fbf5dd0db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.807570 kubelet[2504]: E0912 17:32:57.807528 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.807635 kubelet[2504]: E0912 17:32:57.807597 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx" Sep 12 17:32:57.807635 kubelet[2504]: E0912 17:32:57.807620 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx" Sep 12 17:32:57.807694 kubelet[2504]: E0912 17:32:57.807671 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6778dc6895-fdvxx_calico-apiserver(9aeaba7e-9c59-470f-82df-9b9fbf5dd0db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6778dc6895-fdvxx_calico-apiserver(9aeaba7e-9c59-470f-82df-9b9fbf5dd0db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx" podUID="9aeaba7e-9c59-470f-82df-9b9fbf5dd0db" Sep 12 17:32:57.807885 containerd[1459]: time="2025-09-12T17:32:57.807821213Z" level=error msg="Failed to destroy network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.808259 containerd[1459]: time="2025-09-12T17:32:57.808220429Z" level=error msg="encountered an error cleaning up failed sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.808416 containerd[1459]: time="2025-09-12T17:32:57.808278650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-95vvq,Uid:52776af4-cb40-450c-a1f6-9e31fbaa89d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.808591 kubelet[2504]: E0912 17:32:57.808542 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.808665 kubelet[2504]: E0912 17:32:57.808622 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-95vvq" Sep 12 17:32:57.808665 kubelet[2504]: E0912 17:32:57.808656 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-95vvq" Sep 12 17:32:57.808729 kubelet[2504]: E0912 17:32:57.808701 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-95vvq_calico-system(52776af4-cb40-450c-a1f6-9e31fbaa89d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-95vvq_calico-system(52776af4-cb40-450c-a1f6-9e31fbaa89d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-95vvq" podUID="52776af4-cb40-450c-a1f6-9e31fbaa89d8" Sep 12 17:32:57.881296 containerd[1459]: time="2025-09-12T17:32:57.881231659Z" level=error msg="Failed to destroy network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.882196 containerd[1459]: time="2025-09-12T17:32:57.881674357Z" level=error msg="encountered an error cleaning up failed sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.882196 containerd[1459]: time="2025-09-12T17:32:57.881736376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-shqls,Uid:2bc8466e-fde1-48f6-8cd4-b09a878f3fd7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.882196 containerd[1459]: time="2025-09-12T17:32:57.882144909Z" level=error msg="Failed to destroy network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.882524 containerd[1459]: time="2025-09-12T17:32:57.882474231Z" level=error msg="encountered an error cleaning up failed sandbox 
\"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.882524 containerd[1459]: time="2025-09-12T17:32:57.882517945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cb64f49f-nhnl8,Uid:7551bc19-475c-4a24-99ba-aad4f3dfccb8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.883514 kubelet[2504]: E0912 17:32:57.883291 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.883514 kubelet[2504]: E0912 17:32:57.883329 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:57.883514 kubelet[2504]: E0912 17:32:57.883374 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78cb64f49f-nhnl8" Sep 12 17:32:57.883514 kubelet[2504]: E0912 17:32:57.883398 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78cb64f49f-nhnl8" Sep 12 17:32:57.883648 kubelet[2504]: E0912 17:32:57.883404 2504 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6778dc6895-shqls" Sep 12 17:32:57.883648 kubelet[2504]: E0912 17:32:57.883426 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6778dc6895-shqls" Sep 12 17:32:57.883648 kubelet[2504]: E0912 17:32:57.883449 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78cb64f49f-nhnl8_calico-system(7551bc19-475c-4a24-99ba-aad4f3dfccb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78cb64f49f-nhnl8_calico-system(7551bc19-475c-4a24-99ba-aad4f3dfccb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78cb64f49f-nhnl8" podUID="7551bc19-475c-4a24-99ba-aad4f3dfccb8" Sep 12 17:32:57.883747 kubelet[2504]: E0912 17:32:57.883471 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6778dc6895-shqls_calico-apiserver(2bc8466e-fde1-48f6-8cd4-b09a878f3fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6778dc6895-shqls_calico-apiserver(2bc8466e-fde1-48f6-8cd4-b09a878f3fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6778dc6895-shqls" podUID="2bc8466e-fde1-48f6-8cd4-b09a878f3fd7" Sep 12 17:32:57.885098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369-shm.mount: Deactivated successfully. Sep 12 17:32:57.885238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492-shm.mount: Deactivated successfully. Sep 12 17:32:58.557584 systemd[1]: Created slice kubepods-besteffort-pod948ee8ec_ca22_47e2_8cb0_925053d685b4.slice - libcontainer container kubepods-besteffort-pod948ee8ec_ca22_47e2_8cb0_925053d685b4.slice. 
Sep 12 17:32:58.560111 containerd[1459]: time="2025-09-12T17:32:58.560070630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vh8,Uid:948ee8ec-ca22-47e2-8cb0-925053d685b4,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:58.660579 kubelet[2504]: I0912 17:32:58.660538 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:32:58.662523 containerd[1459]: time="2025-09-12T17:32:58.662001471Z" level=info msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" Sep 12 17:32:58.662523 containerd[1459]: time="2025-09-12T17:32:58.662237723Z" level=info msg="Ensure that sandbox 9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea in task-service has been cleanup successfully" Sep 12 17:32:58.663171 kubelet[2504]: I0912 17:32:58.663145 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:32:58.663814 containerd[1459]: time="2025-09-12T17:32:58.663782975Z" level=info msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" Sep 12 17:32:58.664180 containerd[1459]: time="2025-09-12T17:32:58.664144578Z" level=info msg="Ensure that sandbox e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54 in task-service has been cleanup successfully" Sep 12 17:32:58.666785 kubelet[2504]: I0912 17:32:58.665684 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:32:58.667104 containerd[1459]: time="2025-09-12T17:32:58.667069202Z" level=info msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" Sep 12 17:32:58.667427 containerd[1459]: time="2025-09-12T17:32:58.667399514Z" level=info msg="Ensure that sandbox ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369 in task-service has been cleanup successfully" Sep 12 17:32:58.669591 kubelet[2504]: I0912 17:32:58.669539 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:32:58.670119 containerd[1459]: time="2025-09-12T17:32:58.670089158Z" level=info msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" Sep 12 17:32:58.670387 containerd[1459]: time="2025-09-12T17:32:58.670363735Z" level=info msg="Ensure that sandbox 28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492 in task-service has been cleanup successfully" Sep 12 17:32:58.672126 kubelet[2504]: I0912 17:32:58.672095 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:32:58.672894 containerd[1459]: time="2025-09-12T17:32:58.672687578Z" level=info msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" Sep 12 17:32:58.673145 containerd[1459]: time="2025-09-12T17:32:58.673120738Z" level=info msg="Ensure that sandbox 3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c in task-service has been cleanup successfully" Sep 12 17:32:58.723228 containerd[1459]: time="2025-09-12T17:32:58.723153200Z" level=error msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" 
failed" error="failed to destroy network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.723473 kubelet[2504]: E0912 17:32:58.723427 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:32:58.723538 kubelet[2504]: E0912 17:32:58.723487 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369"} Sep 12 17:32:58.723538 kubelet[2504]: E0912 17:32:58.723528 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:58.723688 kubelet[2504]: E0912 17:32:58.723558 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6778dc6895-shqls" podUID="2bc8466e-fde1-48f6-8cd4-b09a878f3fd7" Sep 12 17:32:58.748680 containerd[1459]: time="2025-09-12T17:32:58.745253589Z" level=error msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" failed" error="failed to destroy network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.748680 containerd[1459]: time="2025-09-12T17:32:58.745379421Z" level=error msg="Failed to destroy network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.748680 containerd[1459]: time="2025-09-12T17:32:58.746559902Z" level=error msg="encountered an error cleaning up failed sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 12 17:32:58.748680 containerd[1459]: time="2025-09-12T17:32:58.746613065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vh8,Uid:948ee8ec-ca22-47e2-8cb0-925053d685b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.748680 containerd[1459]: time="2025-09-12T17:32:58.747108904Z" level=error msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" failed" error="failed to destroy network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.749009 kubelet[2504]: E0912 17:32:58.746482 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:32:58.749009 kubelet[2504]: E0912 17:32:58.746557 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54"} Sep 12 17:32:58.749009 kubelet[2504]: E0912 17:32:58.746635 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:58.749009 kubelet[2504]: E0912 17:32:58.746692 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx" podUID="9aeaba7e-9c59-470f-82df-9b9fbf5dd0db" Sep 12 17:32:58.749254 kubelet[2504]: E0912 17:32:58.746817 2504 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.749254 kubelet[2504]: E0912 17:32:58.746901 2504 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:58.749254 kubelet[2504]: E0912 17:32:58.746922 2504 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x4vh8" Sep 12 17:32:58.749254 kubelet[2504]: E0912 17:32:58.747282 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:32:58.749254 kubelet[2504]: E0912 17:32:58.747317 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea"} Sep 12 17:32:58.749407 kubelet[2504]: E0912 17:32:58.747349 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:58.749407 kubelet[2504]: E0912 17:32:58.747371 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52776af4-cb40-450c-a1f6-9e31fbaa89d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-95vvq" podUID="52776af4-cb40-450c-a1f6-9e31fbaa89d8" Sep 12 17:32:58.749407 kubelet[2504]: E0912 17:32:58.748061 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x4vh8_calico-system(948ee8ec-ca22-47e2-8cb0-925053d685b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x4vh8_calico-system(948ee8ec-ca22-47e2-8cb0-925053d685b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4" Sep 12 17:32:58.755529 containerd[1459]: time="2025-09-12T17:32:58.755428978Z" level=error msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" failed" error="failed to destroy network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.755694 kubelet[2504]: E0912 17:32:58.755661 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:32:58.755748 kubelet[2504]: E0912 17:32:58.755707 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492"} Sep 12 17:32:58.755748 kubelet[2504]: E0912 17:32:58.755743 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:58.755877 kubelet[2504]: E0912 17:32:58.755767 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78cb64f49f-nhnl8" podUID="7551bc19-475c-4a24-99ba-aad4f3dfccb8" Sep 12 17:32:58.755992 containerd[1459]: time="2025-09-12T17:32:58.755942331Z" level=error msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" failed" error="failed to destroy network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:58.756128 kubelet[2504]: E0912 17:32:58.756090 2504 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:32:58.756178 kubelet[2504]: E0912 17:32:58.756130 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c"} Sep 12 17:32:58.756178 kubelet[2504]: E0912 17:32:58.756153 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:58.756178 kubelet[2504]: E0912 17:32:58.756170 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bbp9v" podUID="9139155a-95e6-4e1d-8bf7-6c551b8fbb1a" Sep 12 17:32:58.811361 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2-shm.mount: Deactivated successfully. Sep 12 17:32:59.311524 kubelet[2504]: I0912 17:32:59.311434 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:59.311967 kubelet[2504]: E0912 17:32:59.311929 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:59.674731 kubelet[2504]: I0912 17:32:59.674598 2504 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:32:59.675268 kubelet[2504]: E0912 17:32:59.674963 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:59.675316 containerd[1459]: time="2025-09-12T17:32:59.675175517Z" level=info msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" Sep 12 17:32:59.675597 containerd[1459]: time="2025-09-12T17:32:59.675360462Z" level=info msg="Ensure that sandbox f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2 in task-service has been cleanup successfully" Sep 12 17:32:59.701667 containerd[1459]: time="2025-09-12T17:32:59.701606610Z" level=error msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" failed" error="failed to destroy network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:59.701939 kubelet[2504]: E0912 17:32:59.701891 2504 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:32:59.702021 kubelet[2504]: E0912 17:32:59.701938 2504 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2"} Sep 12 17:32:59.702021 kubelet[2504]: E0912 17:32:59.701978 2504 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"948ee8ec-ca22-47e2-8cb0-925053d685b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:59.702021 kubelet[2504]: E0912 17:32:59.702002 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"948ee8ec-ca22-47e2-8cb0-925053d685b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x4vh8" podUID="948ee8ec-ca22-47e2-8cb0-925053d685b4" Sep 12 17:33:05.140199 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:47294.service - OpenSSH per-connection server daemon (10.0.0.1:47294). Sep 12 17:33:05.280782 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 47294 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:05.282730 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:05.294978 systemd-logind[1442]: New session 8 of user core. Sep 12 17:33:05.301025 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:33:05.464743 sshd[3742]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:05.470318 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:47294.service: Deactivated successfully. Sep 12 17:33:05.473161 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:33:05.474725 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:33:05.476369 systemd-logind[1442]: Removed session 8. Sep 12 17:33:06.795540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386402444.mount: Deactivated successfully. 
Sep 12 17:33:08.457256 containerd[1459]: time="2025-09-12T17:33:08.457160772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:08.458076 containerd[1459]: time="2025-09-12T17:33:08.458031895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:33:08.470614 containerd[1459]: time="2025-09-12T17:33:08.470558816Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:08.471536 containerd[1459]: time="2025-09-12T17:33:08.471478913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.823441634s" Sep 12 17:33:08.471593 containerd[1459]: time="2025-09-12T17:33:08.471537445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:33:08.478891 containerd[1459]: time="2025-09-12T17:33:08.471982494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:08.483652 containerd[1459]: time="2025-09-12T17:33:08.483471625Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:33:08.515693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81107256.mount: Deactivated successfully. Sep 12 17:33:08.523138 containerd[1459]: time="2025-09-12T17:33:08.523051145Z" level=info msg="CreateContainer within sandbox \"0d0deee9d29b5ec3e31e0f2f8ed35708852feb5aae0e8938a67d1b592115bf37\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ec0b30a11231a377aaf1daa60540fdc33879a81112e23b29da5663ef7fd38fd\"" Sep 12 17:33:08.523960 containerd[1459]: time="2025-09-12T17:33:08.523897581Z" level=info msg="StartContainer for \"9ec0b30a11231a377aaf1daa60540fdc33879a81112e23b29da5663ef7fd38fd\"" Sep 12 17:33:08.583152 systemd[1]: Started cri-containerd-9ec0b30a11231a377aaf1daa60540fdc33879a81112e23b29da5663ef7fd38fd.scope - libcontainer container 9ec0b30a11231a377aaf1daa60540fdc33879a81112e23b29da5663ef7fd38fd. Sep 12 17:33:09.037368 containerd[1459]: time="2025-09-12T17:33:09.036608205Z" level=info msg="StartContainer for \"9ec0b30a11231a377aaf1daa60540fdc33879a81112e23b29da5663ef7fd38fd\" returns successfully" Sep 12 17:33:09.073658 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:33:09.074357 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:33:09.553176 containerd[1459]: time="2025-09-12T17:33:09.553052801Z" level=info msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" Sep 12 17:33:09.660231 kubelet[2504]: I0912 17:33:09.659548 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5n26d" podStartSLOduration=1.951439851 podStartE2EDuration="25.659498752s" podCreationTimestamp="2025-09-12 17:32:44 +0000 UTC" firstStartedPulling="2025-09-12 17:32:44.764416604 +0000 UTC m=+21.306370789" lastFinishedPulling="2025-09-12 17:33:08.472475505 +0000 UTC m=+45.014429690" observedRunningTime="2025-09-12 17:33:09.651939227 +0000 UTC m=+46.193893432" watchObservedRunningTime="2025-09-12 17:33:09.659498752 +0000 UTC m=+46.201452937" Sep 12 17:33:09.662907 containerd[1459]: time="2025-09-12T17:33:09.662093153Z" level=info msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.699 [INFO][3844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.699 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" iface="eth0" netns="/var/run/netns/cni-10a231cc-7e1a-75cb-d00d-437587f0ddcc" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.699 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" iface="eth0" netns="/var/run/netns/cni-10a231cc-7e1a-75cb-d00d-437587f0ddcc" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.701 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" iface="eth0" netns="/var/run/netns/cni-10a231cc-7e1a-75cb-d00d-437587f0ddcc" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.701 [INFO][3844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.701 [INFO][3844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.867 [INFO][3880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.868 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.868 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.878 [WARNING][3880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.878 [INFO][3880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.879 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:09.889188 containerd[1459]: 2025-09-12 17:33:09.885 [INFO][3844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:09.890275 containerd[1459]: time="2025-09-12T17:33:09.890001382Z" level=info msg="TearDown network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" successfully" Sep 12 17:33:09.890275 containerd[1459]: time="2025-09-12T17:33:09.890050687Z" level=info msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" returns successfully" Sep 12 17:33:09.893838 systemd[1]: run-netns-cni\x2d10a231cc\x2d7e1a\x2d75cb\x2dd00d\x2d437587f0ddcc.mount: Deactivated successfully. Sep 12 17:33:09.894561 containerd[1459]: time="2025-09-12T17:33:09.894519364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b99944df-4v9pg,Uid:a81211fa-8f2f-44d0-8a0d-86148fe788e8,Namespace:calico-system,Attempt:1,}" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.746 [INFO][3872] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.746 [INFO][3872] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" iface="eth0" netns="/var/run/netns/cni-7435f5f7-7d51-339f-f148-242e026afcb5" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.746 [INFO][3872] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" iface="eth0" netns="/var/run/netns/cni-7435f5f7-7d51-339f-f148-242e026afcb5" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.747 [INFO][3872] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" iface="eth0" netns="/var/run/netns/cni-7435f5f7-7d51-339f-f148-242e026afcb5" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.747 [INFO][3872] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.747 [INFO][3872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.867 [INFO][3889] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.868 [INFO][3889] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.880 [INFO][3889] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.885 [WARNING][3889] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.885 [INFO][3889] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.887 [INFO][3889] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:09.896749 containerd[1459]: 2025-09-12 17:33:09.891 [INFO][3872] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:09.897114 containerd[1459]: time="2025-09-12T17:33:09.896952126Z" level=info msg="TearDown network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" successfully" Sep 12 17:33:09.897114 containerd[1459]: time="2025-09-12T17:33:09.896982314Z" level=info msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" returns successfully" Sep 12 17:33:09.899877 systemd[1]: run-netns-cni\x2d7435f5f7\x2d7d51\x2d339f\x2df148\x2d242e026afcb5.mount: Deactivated successfully. 
Sep 12 17:33:09.971175 kubelet[2504]: I0912 17:33:09.971107 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-ca-bundle\") pod \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " Sep 12 17:33:09.971175 kubelet[2504]: I0912 17:33:09.971170 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-backend-key-pair\") pod \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " Sep 12 17:33:09.971401 kubelet[2504]: I0912 17:33:09.971197 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pbg\" (UniqueName: \"kubernetes.io/projected/7551bc19-475c-4a24-99ba-aad4f3dfccb8-kube-api-access-t5pbg\") pod \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\" (UID: \"7551bc19-475c-4a24-99ba-aad4f3dfccb8\") " Sep 12 17:33:09.974385 kubelet[2504]: I0912 17:33:09.974338 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7551bc19-475c-4a24-99ba-aad4f3dfccb8" (UID: "7551bc19-475c-4a24-99ba-aad4f3dfccb8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:33:09.981725 systemd[1]: var-lib-kubelet-pods-7551bc19\x2d475c\x2d4a24\x2d99ba\x2daad4f3dfccb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5pbg.mount: Deactivated successfully. Sep 12 17:33:09.981872 systemd[1]: var-lib-kubelet-pods-7551bc19\x2d475c\x2d4a24\x2d99ba\x2daad4f3dfccb8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:33:09.983372 kubelet[2504]: I0912 17:33:09.982443 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7551bc19-475c-4a24-99ba-aad4f3dfccb8" (UID: "7551bc19-475c-4a24-99ba-aad4f3dfccb8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:33:09.983372 kubelet[2504]: I0912 17:33:09.983182 2504 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7551bc19-475c-4a24-99ba-aad4f3dfccb8-kube-api-access-t5pbg" (OuterVolumeSpecName: "kube-api-access-t5pbg") pod "7551bc19-475c-4a24-99ba-aad4f3dfccb8" (UID: "7551bc19-475c-4a24-99ba-aad4f3dfccb8"). InnerVolumeSpecName "kube-api-access-t5pbg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:33:10.072040 kubelet[2504]: I0912 17:33:10.071979 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:10.072040 kubelet[2504]: I0912 17:33:10.072021 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7551bc19-475c-4a24-99ba-aad4f3dfccb8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:10.072040 kubelet[2504]: I0912 17:33:10.072032 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t5pbg\" (UniqueName: \"kubernetes.io/projected/7551bc19-475c-4a24-99ba-aad4f3dfccb8-kube-api-access-t5pbg\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:10.228930 systemd-networkd[1401]: cali038598ee7cf: Link UP Sep 12 17:33:10.229373 systemd-networkd[1401]: cali038598ee7cf: Gained carrier Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.086 [INFO][3912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.104 [INFO][3912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0 calico-kube-controllers-8b99944df- calico-system a81211fa-8f2f-44d0-8a0d-86148fe788e8 977 0 2025-09-12 17:32:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8b99944df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8b99944df-4v9pg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali038598ee7cf [] [] }} ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.105 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.130 [INFO][3928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" HandleID="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.130 [INFO][3928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" HandleID="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-8b99944df-4v9pg", "timestamp":"2025-09-12 17:33:10.130618642 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.130 [INFO][3928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.130 [INFO][3928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.131 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.145 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.150 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.154 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.156 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.158 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.158 [INFO][3928] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.159 [INFO][3928] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3 Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.177 [INFO][3928] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.199 [INFO][3928] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.199 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" host="localhost" Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.199 [INFO][3928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:33:10.341995 containerd[1459]: 2025-09-12 17:33:10.199 [INFO][3928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" HandleID="k8s-pod-network.3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.202 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0", GenerateName:"calico-kube-controllers-8b99944df-", Namespace:"calico-system", SelfLink:"", UID:"a81211fa-8f2f-44d0-8a0d-86148fe788e8", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b99944df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8b99944df-4v9pg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038598ee7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.202 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.202 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali038598ee7cf ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.232 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.233 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0", GenerateName:"calico-kube-controllers-8b99944df-", Namespace:"calico-system", SelfLink:"", UID:"a81211fa-8f2f-44d0-8a0d-86148fe788e8", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b99944df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3", Pod:"calico-kube-controllers-8b99944df-4v9pg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038598ee7cf", MAC:"22:ba:a8:73:6a:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:10.342912 containerd[1459]: 2025-09-12 17:33:10.338 [INFO][3912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3" Namespace="calico-system" Pod="calico-kube-controllers-8b99944df-4v9pg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:10.378552 containerd[1459]: time="2025-09-12T17:33:10.378458059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:10.378552 containerd[1459]: time="2025-09-12T17:33:10.378513615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:10.378552 containerd[1459]: time="2025-09-12T17:33:10.378526379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:10.378747 containerd[1459]: time="2025-09-12T17:33:10.378656848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:10.400055 systemd[1]: Started cri-containerd-3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3.scope - libcontainer container 3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3. 
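Note: the four "loading plugin io.containerd.*" lines and the Started cri-containerd-<id>.scope unit mark the runc runtime-v2 shim starting for the new sandbox. Outside the CRI path, the equivalent containerd client calls are sketched below (the image reference and container id are placeholders, not taken from this host):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "default")

	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Starting the task is what spawns the runtime-v2 shim whose
	// plugin-loading lines appear in the log, and what the
	// Started ...scope units track on a systemd host.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}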
Sep 12 17:33:10.412311 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:10.435993 containerd[1459]: time="2025-09-12T17:33:10.435950044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b99944df-4v9pg,Uid:a81211fa-8f2f-44d0-8a0d-86148fe788e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3\"" Sep 12 17:33:10.438218 containerd[1459]: time="2025-09-12T17:33:10.438186510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:33:10.497094 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:47850.service - OpenSSH per-connection server daemon (10.0.0.1:47850). Sep 12 17:33:10.514393 systemd[1]: Removed slice kubepods-besteffort-pod7551bc19_475c_4a24_99ba_aad4f3dfccb8.slice - libcontainer container kubepods-besteffort-pod7551bc19_475c_4a24_99ba_aad4f3dfccb8.slice. Sep 12 17:33:10.543886 sshd[3986]: Accepted publickey for core from 10.0.0.1 port 47850 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:10.545727 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:10.553218 containerd[1459]: time="2025-09-12T17:33:10.553188565Z" level=info msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" Sep 12 17:33:10.554102 containerd[1459]: time="2025-09-12T17:33:10.553916684Z" level=info msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" Sep 12 17:33:10.556600 systemd-logind[1442]: New session 9 of user core. Sep 12 17:33:10.563164 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:33:10.746798 sshd[3986]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:10.752597 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:47850.service: Deactivated successfully. Sep 12 17:33:10.755134 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:33:10.758532 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:33:10.760973 systemd-logind[1442]: Removed session 9. Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.734 [INFO][4026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.734 [INFO][4026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" iface="eth0" netns="/var/run/netns/cni-2710ef38-1db6-a010-2a25-f426c4ab3858" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.734 [INFO][4026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" iface="eth0" netns="/var/run/netns/cni-2710ef38-1db6-a010-2a25-f426c4ab3858" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" iface="eth0" netns="/var/run/netns/cni-2710ef38-1db6-a010-2a25-f426c4ab3858" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.766 [INFO][4061] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.767 [INFO][4061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.767 [INFO][4061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.786 [WARNING][4061] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.786 [INFO][4061] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.789 [INFO][4061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:10.795284 containerd[1459]: 2025-09-12 17:33:10.792 [INFO][4026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:10.798300 containerd[1459]: time="2025-09-12T17:33:10.795558256Z" level=info msg="TearDown network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" successfully" Sep 12 17:33:10.798300 containerd[1459]: time="2025-09-12T17:33:10.795588303Z" level=info msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" returns successfully" Sep 12 17:33:10.798300 containerd[1459]: time="2025-09-12T17:33:10.797104256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbp9v,Uid:9139155a-95e6-4e1d-8bf7-6c551b8fbb1a,Namespace:kube-system,Attempt:1,}" Sep 12 17:33:10.798398 kubelet[2504]: E0912 17:33:10.796080 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:10.800459 systemd[1]: run-netns-cni\x2d2710ef38\x2d1db6\x2da010\x2d2a25\x2df426c4ab3858.mount: Deactivated successfully. Sep 12 17:33:10.809580 systemd[1]: Created slice kubepods-besteffort-pod30a14ef5_5e10_492f_bfc1_0edd33e989e9.slice - libcontainer container kubepods-besteffort-pod30a14ef5_5e10_492f_bfc1_0edd33e989e9.slice. 
Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.734 [INFO][4031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4031] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" iface="eth0" netns="/var/run/netns/cni-aa2b666b-b3b4-de74-c79d-82b3678d2593" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4031] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" iface="eth0" netns="/var/run/netns/cni-aa2b666b-b3b4-de74-c79d-82b3678d2593" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4031] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" iface="eth0" netns="/var/run/netns/cni-aa2b666b-b3b4-de74-c79d-82b3678d2593" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.735 [INFO][4031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.772 [INFO][4060] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.773 [INFO][4060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.789 [INFO][4060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.800 [WARNING][4060] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.800 [INFO][4060] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.814 [INFO][4060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:10.821639 containerd[1459]: 2025-09-12 17:33:10.818 [INFO][4031] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:10.822121 containerd[1459]: time="2025-09-12T17:33:10.821803768Z" level=info msg="TearDown network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" successfully" Sep 12 17:33:10.822121 containerd[1459]: time="2025-09-12T17:33:10.821834827Z" level=info msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" returns successfully" Sep 12 17:33:10.823166 containerd[1459]: time="2025-09-12T17:33:10.823102997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-95vvq,Uid:52776af4-cb40-450c-a1f6-9e31fbaa89d8,Namespace:calico-system,Attempt:1,}" Sep 12 17:33:10.827005 systemd[1]: run-netns-cni\x2daa2b666b\x2db3b4\x2dde74\x2dc79d\x2d82b3678d2593.mount: Deactivated successfully. Sep 12 17:33:10.978600 systemd-networkd[1401]: cali476f6c75536: Link UP Sep 12 17:33:10.980925 systemd-networkd[1401]: cali476f6c75536: Gained carrier Sep 12 17:33:10.982346 kubelet[2504]: I0912 17:33:10.982215 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30a14ef5-5e10-492f-bfc1-0edd33e989e9-whisker-backend-key-pair\") pod \"whisker-8496b6457d-dq9gb\" (UID: \"30a14ef5-5e10-492f-bfc1-0edd33e989e9\") " pod="calico-system/whisker-8496b6457d-dq9gb" Sep 12 17:33:10.982785 kubelet[2504]: I0912 17:33:10.982584 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a14ef5-5e10-492f-bfc1-0edd33e989e9-whisker-ca-bundle\") pod \"whisker-8496b6457d-dq9gb\" (UID: \"30a14ef5-5e10-492f-bfc1-0edd33e989e9\") " pod="calico-system/whisker-8496b6457d-dq9gb" Sep 12 17:33:10.982785 kubelet[2504]: I0912 17:33:10.982627 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlvr\" (UniqueName: \"kubernetes.io/projected/30a14ef5-5e10-492f-bfc1-0edd33e989e9-kube-api-access-thlvr\") pod \"whisker-8496b6457d-dq9gb\" (UID: \"30a14ef5-5e10-492f-bfc1-0edd33e989e9\") " pod="calico-system/whisker-8496b6457d-dq9gb" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.859 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.871 [INFO][4077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0 coredns-668d6bf9bc- kube-system 9139155a-95e6-4e1d-8bf7-6c551b8fbb1a 1003 0 2025-09-12 17:32:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bbp9v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali476f6c75536 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.871 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.914 [INFO][4104] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" HandleID="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.914 [INFO][4104] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" HandleID="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bbp9v", "timestamp":"2025-09-12 17:33:10.914108691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.914 [INFO][4104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.914 [INFO][4104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.914 [INFO][4104] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.924 [INFO][4104] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.933 [INFO][4104] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.940 [INFO][4104] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.943 [INFO][4104] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.946 [INFO][4104] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.946 [INFO][4104] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.948 [INFO][4104] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855 Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.954 [INFO][4104] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.962 [INFO][4104] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.962 [INFO][4104] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" host="localhost" Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.962 [INFO][4104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:11.001199 containerd[1459]: 2025-09-12 17:33:10.963 [INFO][4104] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" HandleID="k8s-pod-network.e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.972 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bbp9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476f6c75536", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.972 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.972 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali476f6c75536 
ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.982 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.983 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855", Pod:"coredns-668d6bf9bc-bbp9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476f6c75536", MAC:"a6:8e:d4:e7:87:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.003216 containerd[1459]: 2025-09-12 17:33:10.995 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855" Namespace="kube-system" Pod="coredns-668d6bf9bc-bbp9v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:11.053724 containerd[1459]: time="2025-09-12T17:33:11.052789046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:11.053724 containerd[1459]: time="2025-09-12T17:33:11.052891621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:11.053724 containerd[1459]: time="2025-09-12T17:33:11.052906690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.053724 containerd[1459]: time="2025-09-12T17:33:11.053069170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.087250 systemd[1]: Started cri-containerd-e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855.scope - libcontainer container e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855. Sep 12 17:33:11.119317 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:11.154483 systemd-networkd[1401]: calid1de226b9e3: Link UP Sep 12 17:33:11.157466 systemd-networkd[1401]: calid1de226b9e3: Gained carrier Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.878 [INFO][4089] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.892 [INFO][4089] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--95vvq-eth0 goldmane-54d579b49d- calico-system 52776af4-cb40-450c-a1f6-9e31fbaa89d8 1002 0 2025-09-12 17:32:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-95vvq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid1de226b9e3 [] [] }} ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.892 [INFO][4089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.950 [INFO][4127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" HandleID="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.950 [INFO][4127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" HandleID="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-95vvq", "timestamp":"2025-09-12 17:33:10.950454622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 
12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.951 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.962 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:10.964 [INFO][4127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.026 [INFO][4127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.035 [INFO][4127] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.047 [INFO][4127] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.050 [INFO][4127] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.052 [INFO][4127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.053 [INFO][4127] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.055 [INFO][4127] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.128 [INFO][4127] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.136 [INFO][4127] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.136 [INFO][4127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" host="localhost" Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.136 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
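Note: the coredns WorkloadEndpoint dumps above print Port values in Go hex notation: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the CoreDNS Prometheus metrics port). Quick conversion:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153
}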
Sep 12 17:33:11.186387 containerd[1459]: 2025-09-12 17:33:11.136 [INFO][4127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" HandleID="k8s-pod-network.1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.147 [INFO][4089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--95vvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"52776af4-cb40-450c-a1f6-9e31fbaa89d8", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-95vvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1de226b9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.148 [INFO][4089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.148 [INFO][4089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1de226b9e3 ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.158 [INFO][4089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.162 [INFO][4089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--95vvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"52776af4-cb40-450c-a1f6-9e31fbaa89d8", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b", Pod:"goldmane-54d579b49d-95vvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1de226b9e3", MAC:"ba:1b:05:16:6d:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.186989 containerd[1459]: 2025-09-12 17:33:11.181 [INFO][4089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b" Namespace="calico-system" Pod="goldmane-54d579b49d-95vvq" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:11.209816 containerd[1459]: time="2025-09-12T17:33:11.209125169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bbp9v,Uid:9139155a-95e6-4e1d-8bf7-6c551b8fbb1a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855\"" Sep 12 17:33:11.218482 kubelet[2504]: E0912 17:33:11.217155 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:11.221251 containerd[1459]: time="2025-09-12T17:33:11.221204318Z" level=info msg="CreateContainer within sandbox \"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:33:11.256018 containerd[1459]: time="2025-09-12T17:33:11.253802315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:11.256018 containerd[1459]: time="2025-09-12T17:33:11.253873080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:11.256018 containerd[1459]: time="2025-09-12T17:33:11.253883831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.256018 containerd[1459]: time="2025-09-12T17:33:11.253987278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.261219 containerd[1459]: time="2025-09-12T17:33:11.261164198Z" level=info msg="CreateContainer within sandbox \"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38016a2aad4bd856cc3a8631b4625fe223bcad007715f6bce06ab0ec7a6939c6\"" Sep 12 17:33:11.262833 containerd[1459]: time="2025-09-12T17:33:11.262137463Z" level=info msg="StartContainer for \"38016a2aad4bd856cc3a8631b4625fe223bcad007715f6bce06ab0ec7a6939c6\"" Sep 12 17:33:11.276245 systemd[1]: Started cri-containerd-1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b.scope - libcontainer container 1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b. Sep 12 17:33:11.290884 kernel: bpftool[4350]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:33:11.300115 systemd[1]: Started cri-containerd-38016a2aad4bd856cc3a8631b4625fe223bcad007715f6bce06ab0ec7a6939c6.scope - libcontainer container 38016a2aad4bd856cc3a8631b4625fe223bcad007715f6bce06ab0ec7a6939c6. Sep 12 17:33:11.312233 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:11.349094 containerd[1459]: time="2025-09-12T17:33:11.348992179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-95vvq,Uid:52776af4-cb40-450c-a1f6-9e31fbaa89d8,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b\"" Sep 12 17:33:11.364148 containerd[1459]: time="2025-09-12T17:33:11.364082662Z" level=info msg="StartContainer for \"38016a2aad4bd856cc3a8631b4625fe223bcad007715f6bce06ab0ec7a6939c6\" returns successfully" Sep 12 17:33:11.413708 containerd[1459]: time="2025-09-12T17:33:11.413645217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8496b6457d-dq9gb,Uid:30a14ef5-5e10-492f-bfc1-0edd33e989e9,Namespace:calico-system,Attempt:0,}" Sep 12 17:33:11.456257 systemd-networkd[1401]: cali038598ee7cf: Gained IPv6LL Sep 12 17:33:11.514881 kubelet[2504]: E0912 17:33:11.514832 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:11.539115 kubelet[2504]: I0912 17:33:11.539041 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bbp9v" podStartSLOduration=41.539018563 podStartE2EDuration="41.539018563s" podCreationTimestamp="2025-09-12 17:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:11.538978797 +0000 UTC m=+48.080933002" watchObservedRunningTime="2025-09-12 17:33:11.539018563 +0000 UTC m=+48.080972748" Sep 12 17:33:11.555207 containerd[1459]: time="2025-09-12T17:33:11.552669410Z" level=info msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" Sep 12 17:33:11.557406 kubelet[2504]: I0912 17:33:11.557373 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7551bc19-475c-4a24-99ba-aad4f3dfccb8" path="/var/lib/kubelet/pods/7551bc19-475c-4a24-99ba-aad4f3dfccb8/volumes" Sep 12 17:33:11.627660 systemd-networkd[1401]: vxlan.calico: Link UP Sep 12 17:33:11.627671 systemd-networkd[1401]: vxlan.calico: Gained carrier Sep 12 17:33:11.678377 systemd-networkd[1401]: 
calibc9dabbaf92: Link UP Sep 12 17:33:11.680696 systemd-networkd[1401]: calibc9dabbaf92: Gained carrier Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.474 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8496b6457d--dq9gb-eth0 whisker-8496b6457d- calico-system 30a14ef5-5e10-492f-bfc1-0edd33e989e9 1019 0 2025-09-12 17:33:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8496b6457d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8496b6457d-dq9gb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc9dabbaf92 [] [] }} ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.474 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.517 [INFO][4395] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" HandleID="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Workload="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.518 [INFO][4395] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" HandleID="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Workload="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8496b6457d-dq9gb", "timestamp":"2025-09-12 17:33:11.517717654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.518 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.518 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.518 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.528 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.541 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.632 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.641 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.645 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.646 [INFO][4395] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.649 [INFO][4395] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871 Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.657 [INFO][4395] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.665 [INFO][4395] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.665 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" host="localhost" Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.666 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
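The whisker pod then replays the same transaction against the same affine block and receives the next address, 192.168.88.132. A /26 holds 64 addresses, so this one block covers every workload endpoint in the excerpt with plenty of headroom; a quick standard-library check:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    p := netip.MustParsePrefix("192.168.88.128/26")
    first := p.Masked().Addr()
    // Walk to the last address still inside the prefix, counting as we go.
    last, n := first, 0
    for a := first; p.Contains(a); a = a.Next() {
        last = a
        n++
    }
    fmt.Printf("%s holds %d addresses: %s - %s\n", p, n, first, last)
    // Output: 192.168.88.128/26 holds 64 addresses: 192.168.88.128 - 192.168.88.191
}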
Sep 12 17:33:11.719325 containerd[1459]: 2025-09-12 17:33:11.666 [INFO][4395] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" HandleID="k8s-pod-network.c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Workload="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.671 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8496b6457d--dq9gb-eth0", GenerateName:"whisker-8496b6457d-", Namespace:"calico-system", SelfLink:"", UID:"30a14ef5-5e10-492f-bfc1-0edd33e989e9", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8496b6457d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8496b6457d-dq9gb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc9dabbaf92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.671 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.671 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc9dabbaf92 ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.681 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.684 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8496b6457d--dq9gb-eth0", GenerateName:"whisker-8496b6457d-", Namespace:"calico-system", SelfLink:"", UID:"30a14ef5-5e10-492f-bfc1-0edd33e989e9", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 33, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8496b6457d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871", Pod:"whisker-8496b6457d-dq9gb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc9dabbaf92", MAC:"42:0e:88:63:cc:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:11.720047 containerd[1459]: 2025-09-12 17:33:11.706 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871" Namespace="calico-system" Pod="whisker-8496b6457d-dq9gb" WorkloadEndpoint="localhost-k8s-whisker--8496b6457d--dq9gb-eth0" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.654 [INFO][4419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.654 [INFO][4419] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" iface="eth0" netns="/var/run/netns/cni-cc148e85-61aa-0ccf-350c-e1cb278392d7" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.656 [INFO][4419] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" iface="eth0" netns="/var/run/netns/cni-cc148e85-61aa-0ccf-350c-e1cb278392d7" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.656 [INFO][4419] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" iface="eth0" netns="/var/run/netns/cni-cc148e85-61aa-0ccf-350c-e1cb278392d7" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.656 [INFO][4419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.656 [INFO][4419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.709 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.710 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.710 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.716 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.716 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.717 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:11.726886 containerd[1459]: 2025-09-12 17:33:11.720 [INFO][4419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:11.726886 containerd[1459]: time="2025-09-12T17:33:11.725953168Z" level=info msg="TearDown network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" successfully" Sep 12 17:33:11.726886 containerd[1459]: time="2025-09-12T17:33:11.725981533Z" level=info msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" returns successfully" Sep 12 17:33:11.728248 containerd[1459]: time="2025-09-12T17:33:11.727829167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-shqls,Uid:2bc8466e-fde1-48f6-8cd4-b09a878f3fd7,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:33:11.728998 systemd[1]: run-netns-cni\x2dcc148e85\x2d61aa\x2d0ccf\x2d350c\x2de1cb278392d7.mount: Deactivated successfully. Sep 12 17:33:11.757606 containerd[1459]: time="2025-09-12T17:33:11.757222532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:11.757606 containerd[1459]: time="2025-09-12T17:33:11.757309027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:11.757606 containerd[1459]: time="2025-09-12T17:33:11.757329696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.757606 containerd[1459]: time="2025-09-12T17:33:11.757479822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:11.799239 systemd[1]: Started cri-containerd-c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871.scope - libcontainer container c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871. Sep 12 17:33:11.816875 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:11.847669 containerd[1459]: time="2025-09-12T17:33:11.847532788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8496b6457d-dq9gb,Uid:30a14ef5-5e10-492f-bfc1-0edd33e989e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871\"" Sep 12 17:33:12.096048 systemd-networkd[1401]: cali476f6c75536: Gained IPv6LL Sep 12 17:33:12.161469 systemd-networkd[1401]: calid1638fc3401: Link UP Sep 12 17:33:12.162676 systemd-networkd[1401]: calid1638fc3401: Gained carrier Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.834 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0 calico-apiserver-6778dc6895- calico-apiserver 2bc8466e-fde1-48f6-8cd4-b09a878f3fd7 1039 0 2025-09-12 17:32:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6778dc6895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6778dc6895-shqls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid1638fc3401 [] [] }} ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.834 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.872 [INFO][4536] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" HandleID="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.873 [INFO][4536] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" HandleID="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0001394f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6778dc6895-shqls", "timestamp":"2025-09-12 17:33:11.872182334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.873 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.873 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.873 [INFO][4536] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.883 [INFO][4536] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.887 [INFO][4536] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.933 [INFO][4536] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.935 [INFO][4536] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.938 [INFO][4536] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.938 [INFO][4536] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:11.939 [INFO][4536] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891 Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:12.146 [INFO][4536] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:12.153 [INFO][4536] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:12.153 [INFO][4536] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" host="localhost" Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:12.153 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
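Note the pause between "Creating new handle" at 17:33:11.939 and the completed "Writing block in order to claim IPs" at 17:33:12.146: a claim only becomes durable once the updated block is written back to the datastore. Calico guards such writes with the resource's revision, so two hosts racing on one block conflict and the loser re-reads and retries; only the write itself is visible in this log, so take the loop below as an illustrative model with hypothetical names, not Calico code.

package main

import (
    "fmt"
    "sync"
)

// store is a toy revisioned datastore: writes succeed only if the caller saw
// the latest revision, which is how a "write block to claim IPs" can fail
// and retry when another writer raced on the same block.
type store struct {
    mu  sync.Mutex
    rev int64
    val []string // allocated handles, a stand-in for the block contents
}

func (s *store) read() (int64, []string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.rev, append([]string(nil), s.val...)
}

// writeIfRev is the compare-and-swap: it rejects stale writers.
func (s *store) writeIfRev(rev int64, val []string) bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    if rev != s.rev {
        return false // somebody else claimed from this block first
    }
    s.val = val
    s.rev++
    return true
}

func claim(s *store, handle string) {
    for {
        rev, block := s.read()
        block = append(block, handle)
        if s.writeIfRev(rev, block) {
            return
        }
        // Stale read: reload the block and try again.
    }
}

func main() {
    s := &store{}
    var wg sync.WaitGroup
    for i := 0; i < 8; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            claim(s, fmt.Sprintf("handle-%d", i))
        }(i)
    }
    wg.Wait()
    _, block := s.read()
    fmt.Println(len(block), "claims committed") // always 8, no claim is lost
}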
Sep 12 17:33:12.183290 containerd[1459]: 2025-09-12 17:33:12.153 [INFO][4536] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" HandleID="k8s-pod-network.46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.157 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6778dc6895-shqls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1638fc3401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.157 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.157 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1638fc3401 ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.162 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.163 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891", Pod:"calico-apiserver-6778dc6895-shqls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1638fc3401", MAC:"0e:ec:50:58:af:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:12.184077 containerd[1459]: 2025-09-12 17:33:12.177 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-shqls" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:12.210496 containerd[1459]: time="2025-09-12T17:33:12.210185538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:12.210496 containerd[1459]: time="2025-09-12T17:33:12.210256544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:12.210496 containerd[1459]: time="2025-09-12T17:33:12.210270400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:12.210496 containerd[1459]: time="2025-09-12T17:33:12.210379128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:12.236026 systemd[1]: Started cri-containerd-46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891.scope - libcontainer container 46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891. 
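The kubelet error repeated through this excerpt, "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", records kubelet trimming the host resolv.conf while building a pod's DNS config: it keeps the first three nameservers (three is the cap kubelet's dns.go enforces, and three survive here) and reports the rest as omitted. The sketch below reproduces that trimming; the constant name is ours, and the fourth nameserver in the sample input is invented for the demonstration.

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// maxNameservers mirrors the cap kubelet applies when building a pod's
// resolv.conf; three entries survive in the log above.
const maxNameservers = 3

// applyNameserverLimit keeps the first maxNameservers entries, the behaviour
// the "Nameserver limits exceeded" error is reporting.
func applyNameserverLimit(resolvConf string) (kept, omitted []string) {
    sc := bufio.NewScanner(strings.NewReader(resolvConf))
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            if len(kept) < maxNameservers {
                kept = append(kept, fields[1])
            } else {
                omitted = append(omitted, fields[1])
            }
        }
    }
    return kept, omitted
}

func main() {
    // 8.8.4.4 is a made-up fourth nameserver to trigger the limit.
    conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    kept, omitted := applyNameserverLimit(conf)
    fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
    fmt.Println("omitted:", omitted)                 // omitted: [8.8.4.4]
}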
Sep 12 17:33:12.253105 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:12.287897 containerd[1459]: time="2025-09-12T17:33:12.287815977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-shqls,Uid:2bc8466e-fde1-48f6-8cd4-b09a878f3fd7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891\"" Sep 12 17:33:12.524905 kubelet[2504]: E0912 17:33:12.524748 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:12.554108 containerd[1459]: time="2025-09-12T17:33:12.554064213Z" level=info msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" Sep 12 17:33:12.800047 systemd-networkd[1401]: calid1de226b9e3: Gained IPv6LL Sep 12 17:33:12.864999 systemd-networkd[1401]: calibc9dabbaf92: Gained IPv6LL Sep 12 17:33:12.928236 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.619 [INFO][4651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.621 [INFO][4651] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" iface="eth0" netns="/var/run/netns/cni-49d3632d-afff-97fa-f1ec-3b9bd3eb4c86" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.621 [INFO][4651] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" iface="eth0" netns="/var/run/netns/cni-49d3632d-afff-97fa-f1ec-3b9bd3eb4c86" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.622 [INFO][4651] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" iface="eth0" netns="/var/run/netns/cni-49d3632d-afff-97fa-f1ec-3b9bd3eb4c86" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.622 [INFO][4651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:12.622 [INFO][4651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.182 [INFO][4660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.182 [INFO][4660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.182 [INFO][4660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.187 [WARNING][4660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.187 [INFO][4660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.189 [INFO][4660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:13.198895 containerd[1459]: 2025-09-12 17:33:13.193 [INFO][4651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:13.202056 containerd[1459]: time="2025-09-12T17:33:13.202009091Z" level=info msg="TearDown network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" successfully" Sep 12 17:33:13.202056 containerd[1459]: time="2025-09-12T17:33:13.202049167Z" level=info msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" returns successfully" Sep 12 17:33:13.202743 kubelet[2504]: E0912 17:33:13.202475 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:13.203387 systemd[1]: run-netns-cni\x2d49d3632d\x2dafff\x2d97fa\x2df1ec\x2d3b9bd3eb4c86.mount: Deactivated successfully. Sep 12 17:33:13.203768 containerd[1459]: time="2025-09-12T17:33:13.203650982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlnxj,Uid:c771cb80-b69d-4703-831e-b3564032fbcd,Namespace:kube-system,Attempt:1,}" Sep 12 17:33:13.368148 systemd-networkd[1401]: cali05ae1163743: Link UP Sep 12 17:33:13.370192 systemd-networkd[1401]: cali05ae1163743: Gained carrier Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.287 [INFO][4673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0 coredns-668d6bf9bc- kube-system c771cb80-b69d-4703-831e-b3564032fbcd 1063 0 2025-09-12 17:32:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hlnxj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali05ae1163743 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.287 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.317 [INFO][4689] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" HandleID="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.318 [INFO][4689] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" HandleID="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hlnxj", "timestamp":"2025-09-12 17:33:13.317829421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.318 [INFO][4689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.318 [INFO][4689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.318 [INFO][4689] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.326 [INFO][4689] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.331 [INFO][4689] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.336 [INFO][4689] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.338 [INFO][4689] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.341 [INFO][4689] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.341 [INFO][4689] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.343 [INFO][4689] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.349 [INFO][4689] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.357 [INFO][4689] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.357 [INFO][4689] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" host="localhost" Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.357 [INFO][4689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:13.394945 containerd[1459]: 2025-09-12 17:33:13.358 [INFO][4689] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" HandleID="k8s-pod-network.40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.361 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c771cb80-b69d-4703-831e-b3564032fbcd", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hlnxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ae1163743", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.361 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.361 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05ae1163743 ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.371 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.372 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c771cb80-b69d-4703-831e-b3564032fbcd", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a", Pod:"coredns-668d6bf9bc-hlnxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ae1163743", MAC:"e6:9e:15:dd:d2:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:13.395765 containerd[1459]: 2025-09-12 17:33:13.389 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a" Namespace="kube-system" Pod="coredns-668d6bf9bc-hlnxj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:13.477335 containerd[1459]: time="2025-09-12T17:33:13.476988229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:13.477335 containerd[1459]: time="2025-09-12T17:33:13.477064465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:13.477335 containerd[1459]: time="2025-09-12T17:33:13.477076798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:13.477500 containerd[1459]: time="2025-09-12T17:33:13.477181808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:13.505156 systemd[1]: Started cri-containerd-40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a.scope - libcontainer container 40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a. Sep 12 17:33:13.519895 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:13.529457 kubelet[2504]: E0912 17:33:13.529417 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:13.549782 containerd[1459]: time="2025-09-12T17:33:13.549731138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hlnxj,Uid:c771cb80-b69d-4703-831e-b3564032fbcd,Namespace:kube-system,Attempt:1,} returns sandbox id \"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a\"" Sep 12 17:33:13.550526 kubelet[2504]: E0912 17:33:13.550498 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:13.556289 containerd[1459]: time="2025-09-12T17:33:13.556122063Z" level=info msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" Sep 12 17:33:13.558937 containerd[1459]: time="2025-09-12T17:33:13.558897485Z" level=info msg="CreateContainer within sandbox \"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:33:13.591182 containerd[1459]: time="2025-09-12T17:33:13.591096644Z" level=info msg="CreateContainer within sandbox \"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"791fbc0fddaadba402d66d37368ed3fe2b1fcf3f7fe3ec44f27126e79ca4b0ff\"" Sep 12 17:33:13.592032 containerd[1459]: time="2025-09-12T17:33:13.591896269Z" level=info msg="StartContainer for \"791fbc0fddaadba402d66d37368ed3fe2b1fcf3f7fe3ec44f27126e79ca4b0ff\"" Sep 12 17:33:13.652979 systemd[1]: Started cri-containerd-791fbc0fddaadba402d66d37368ed3fe2b1fcf3f7fe3ec44f27126e79ca4b0ff.scope - libcontainer container 791fbc0fddaadba402d66d37368ed3fe2b1fcf3f7fe3ec44f27126e79ca4b0ff. Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.620 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.620 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" iface="eth0" netns="/var/run/netns/cni-f874e2cd-b075-432f-1857-0680544e2be9" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.621 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" iface="eth0" netns="/var/run/netns/cni-f874e2cd-b075-432f-1857-0680544e2be9" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.621 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" iface="eth0" netns="/var/run/netns/cni-f874e2cd-b075-432f-1857-0680544e2be9" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.621 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.621 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.653 [INFO][4788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.653 [INFO][4788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.653 [INFO][4788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.661 [WARNING][4788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.661 [INFO][4788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.662 [INFO][4788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:13.675184 containerd[1459]: 2025-09-12 17:33:13.666 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:13.676030 containerd[1459]: time="2025-09-12T17:33:13.675984570Z" level=info msg="TearDown network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" successfully" Sep 12 17:33:13.677441 containerd[1459]: time="2025-09-12T17:33:13.676207064Z" level=info msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" returns successfully" Sep 12 17:33:13.679442 containerd[1459]: time="2025-09-12T17:33:13.679391986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vh8,Uid:948ee8ec-ca22-47e2-8cb0-925053d685b4,Namespace:calico-system,Attempt:1,}" Sep 12 17:33:13.703316 containerd[1459]: time="2025-09-12T17:33:13.703257637Z" level=info msg="StartContainer for \"791fbc0fddaadba402d66d37368ed3fe2b1fcf3f7fe3ec44f27126e79ca4b0ff\" returns successfully" Sep 12 17:33:13.859004 systemd-networkd[1401]: cali42dcfbe157b: Link UP Sep 12 17:33:13.860293 systemd-networkd[1401]: cali42dcfbe157b: Gained carrier Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.773 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--x4vh8-eth0 csi-node-driver- calico-system 948ee8ec-ca22-47e2-8cb0-925053d685b4 1078 0 2025-09-12 17:32:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-x4vh8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali42dcfbe157b [] [] }} ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.774 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.812 [INFO][4833] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" HandleID="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.812 [INFO][4833] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" HandleID="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139d90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-x4vh8", "timestamp":"2025-09-12 17:33:13.812430172 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.812 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.813 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.813 [INFO][4833] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.821 [INFO][4833] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.827 [INFO][4833] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.831 [INFO][4833] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.833 [INFO][4833] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.835 [INFO][4833] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.835 [INFO][4833] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.836 [INFO][4833] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.842 [INFO][4833] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.852 [INFO][4833] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.852 [INFO][4833] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" host="localhost" Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.852 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:33:13.882559 containerd[1459]: 2025-09-12 17:33:13.852 [INFO][4833] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" HandleID="k8s-pod-network.74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.856 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ee8ec-ca22-47e2-8cb0-925053d685b4", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-x4vh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42dcfbe157b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.856 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.856 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42dcfbe157b ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.859 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.860 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ee8ec-ca22-47e2-8cb0-925053d685b4", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d", Pod:"csi-node-driver-x4vh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42dcfbe157b", MAC:"3e:6f:74:58:0a:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:13.884163 containerd[1459]: 2025-09-12 17:33:13.877 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d" Namespace="calico-system" Pod="csi-node-driver-x4vh8" WorkloadEndpoint="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:13.991941 containerd[1459]: time="2025-09-12T17:33:13.991262259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:13.991941 containerd[1459]: time="2025-09-12T17:33:13.991334076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:13.991941 containerd[1459]: time="2025-09-12T17:33:13.991349235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:13.991941 containerd[1459]: time="2025-09-12T17:33:13.991459685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:14.022061 systemd[1]: Started cri-containerd-74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d.scope - libcontainer container 74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d. 
Sep 12 17:33:14.037299 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:14.050321 containerd[1459]: time="2025-09-12T17:33:14.050261427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x4vh8,Uid:948ee8ec-ca22-47e2-8cb0-925053d685b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d\"" Sep 12 17:33:14.081011 systemd-networkd[1401]: calid1638fc3401: Gained IPv6LL Sep 12 17:33:14.319102 containerd[1459]: time="2025-09-12T17:33:14.319006741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:14.320227 containerd[1459]: time="2025-09-12T17:33:14.320165479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:33:14.322080 containerd[1459]: time="2025-09-12T17:33:14.322042057Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:14.324792 containerd[1459]: time="2025-09-12T17:33:14.324731102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:14.325661 containerd[1459]: time="2025-09-12T17:33:14.325596542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.887372811s" Sep 12 17:33:14.325661 containerd[1459]: time="2025-09-12T17:33:14.325652678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:33:14.326797 containerd[1459]: time="2025-09-12T17:33:14.326762424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:33:14.335592 containerd[1459]: time="2025-09-12T17:33:14.335555405Z" level=info msg="CreateContainer within sandbox \"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:33:14.354647 containerd[1459]: time="2025-09-12T17:33:14.354588767Z" level=info msg="CreateContainer within sandbox \"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0\"" Sep 12 17:33:14.355323 containerd[1459]: time="2025-09-12T17:33:14.355294572Z" level=info msg="StartContainer for \"b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0\"" Sep 12 17:33:14.402215 systemd[1]: Started cri-containerd-b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0.scope - libcontainer container b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0. 
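For scale, the kube-controllers pull above reports a 52,770,417-byte image fetched in 3.887372811s, about 13.6 MB/s (12.9 MiB/s). The arithmetic, using the log's own figures:

package main

import "fmt"

func main() {
	const bytes = 52770417.0    // "size" reported for the kube-controllers image
	const seconds = 3.887372811 // pull duration from the same log line
	fmt.Printf("%.1f MiB/s\n", bytes/seconds/(1<<20)) // prints 12.9 MiB/s
}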
Sep 12 17:33:14.488584 systemd[1]: run-netns-cni\x2df874e2cd\x2db075\x2d432f\x2d1857\x2d0680544e2be9.mount: Deactivated successfully. Sep 12 17:33:14.516283 containerd[1459]: time="2025-09-12T17:33:14.516200862Z" level=info msg="StartContainer for \"b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0\" returns successfully" Sep 12 17:33:14.555840 containerd[1459]: time="2025-09-12T17:33:14.552900830Z" level=info msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" Sep 12 17:33:14.564747 kubelet[2504]: E0912 17:33:14.564689 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:14.578003 kubelet[2504]: I0912 17:33:14.574895 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8b99944df-4v9pg" podStartSLOduration=26.685547971 podStartE2EDuration="30.57487639s" podCreationTimestamp="2025-09-12 17:32:44 +0000 UTC" firstStartedPulling="2025-09-12 17:33:10.437186453 +0000 UTC m=+46.979140628" lastFinishedPulling="2025-09-12 17:33:14.326514862 +0000 UTC m=+50.868469047" observedRunningTime="2025-09-12 17:33:14.574483641 +0000 UTC m=+51.116437836" watchObservedRunningTime="2025-09-12 17:33:14.57487639 +0000 UTC m=+51.116830575" Sep 12 17:33:14.599439 kubelet[2504]: I0912 17:33:14.599345 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hlnxj" podStartSLOduration=44.599318652 podStartE2EDuration="44.599318652s" podCreationTimestamp="2025-09-12 17:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:14.595900266 +0000 UTC m=+51.137854471" watchObservedRunningTime="2025-09-12 17:33:14.599318652 +0000 UTC m=+51.141272837" Sep 12 17:33:14.630996 systemd[1]: run-containerd-runc-k8s.io-b0f6c5b23397d102f3d2f6bbea60b9ec5592229049ddad52f65eb641b6da9da0-runc.jS77Fd.mount: Deactivated successfully. Sep 12 17:33:15.424078 systemd-networkd[1401]: cali05ae1163743: Gained IPv6LL Sep 12 17:33:15.591943 kubelet[2504]: E0912 17:33:15.591356 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.586 [INFO][4956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.586 [INFO][4956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" iface="eth0" netns="/var/run/netns/cni-1a000d59-450f-295b-1dff-7b6699982a54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.587 [INFO][4956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" iface="eth0" netns="/var/run/netns/cni-1a000d59-450f-295b-1dff-7b6699982a54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.588 [INFO][4956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
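The two pod_startup_latency_tracker entries above decompose cleanly: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to subtract the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic clock, the m=+ offsets). For calico-kube-controllers that gives 30.574876390 − (50.868469047 − 46.979140628) = 26.685547971s, exactly as logged; coredns pulled nothing, so its SLO and E2E durations coincide at 44.599318652s. A check of that arithmetic:

package main

import "fmt"

func main() {
	// Monotonic (m=+) offsets from the calico-kube-controllers entry, in seconds.
	e2e := 30.57487639                  // observedRunningTime − podCreationTimestamp
	pull := 50.868469047 - 46.979140628 // lastFinishedPulling − firstStartedPulling
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-pull) // 26.685547971s, as logged
}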
ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" iface="eth0" netns="/var/run/netns/cni-1a000d59-450f-295b-1dff-7b6699982a54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.588 [INFO][4956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.588 [INFO][4956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.619 [INFO][4985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.619 [INFO][4985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.620 [INFO][4985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.628 [WARNING][4985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.629 [INFO][4985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.630 [INFO][4985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:15.638308 containerd[1459]: 2025-09-12 17:33:15.634 [INFO][4956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:15.641815 containerd[1459]: time="2025-09-12T17:33:15.641041175Z" level=info msg="TearDown network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" successfully" Sep 12 17:33:15.641815 containerd[1459]: time="2025-09-12T17:33:15.641082503Z" level=info msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" returns successfully" Sep 12 17:33:15.642274 containerd[1459]: time="2025-09-12T17:33:15.642244157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-fdvxx,Uid:9aeaba7e-9c59-470f-82df-9b9fbf5dd0db,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:33:15.644779 systemd[1]: run-netns-cni\x2d1a000d59\x2d450f\x2d295b\x2d1dff\x2d7b6699982a54.mount: Deactivated successfully. Sep 12 17:33:15.744378 systemd-networkd[1401]: cali42dcfbe157b: Gained IPv6LL Sep 12 17:33:15.758409 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:47862.service - OpenSSH per-connection server daemon (10.0.0.1:47862). 
Sep 12 17:33:15.789682 systemd-networkd[1401]: cali540bb2bc8a7: Link UP Sep 12 17:33:15.790066 systemd-networkd[1401]: cali540bb2bc8a7: Gained carrier Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.705 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0 calico-apiserver-6778dc6895- calico-apiserver 9aeaba7e-9c59-470f-82df-9b9fbf5dd0db 1107 0 2025-09-12 17:32:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6778dc6895 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6778dc6895-fdvxx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali540bb2bc8a7 [] [] }} ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.705 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.734 [INFO][5006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" HandleID="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.734 [INFO][5006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" HandleID="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6778dc6895-fdvxx", "timestamp":"2025-09-12 17:33:15.733997696 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.734 [INFO][5006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.734 [INFO][5006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.734 [INFO][5006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.742 [INFO][5006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.751 [INFO][5006] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.763 [INFO][5006] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.764 [INFO][5006] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.768 [INFO][5006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.768 [INFO][5006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.770 [INFO][5006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7 Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.775 [INFO][5006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.781 [INFO][5006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.781 [INFO][5006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" host="localhost" Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.781 [INFO][5006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:33:15.803904 containerd[1459]: 2025-09-12 17:33:15.781 [INFO][5006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" HandleID="k8s-pod-network.b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.785 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6778dc6895-fdvxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali540bb2bc8a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.785 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.785 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali540bb2bc8a7 ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.788 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.788 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7", Pod:"calico-apiserver-6778dc6895-fdvxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali540bb2bc8a7", MAC:"92:72:80:6a:db:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:15.804595 containerd[1459]: 2025-09-12 17:33:15.798 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7" Namespace="calico-apiserver" Pod="calico-apiserver-6778dc6895-fdvxx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:15.818021 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 47862 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:15.820621 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:15.830183 containerd[1459]: time="2025-09-12T17:33:15.829237951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:33:15.830183 containerd[1459]: time="2025-09-12T17:33:15.829305549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:33:15.830183 containerd[1459]: time="2025-09-12T17:33:15.829319756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:15.830183 containerd[1459]: time="2025-09-12T17:33:15.829493478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:33:15.830133 systemd-logind[1442]: New session 10 of user core. Sep 12 17:33:15.835051 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 17:33:15.861013 systemd[1]: Started cri-containerd-b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7.scope - libcontainer container b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7. Sep 12 17:33:15.874419 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:33:15.904161 containerd[1459]: time="2025-09-12T17:33:15.904109985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6778dc6895-fdvxx,Uid:9aeaba7e-9c59-470f-82df-9b9fbf5dd0db,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7\"" Sep 12 17:33:15.988797 sshd[5014]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:15.993349 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:47862.service: Deactivated successfully. Sep 12 17:33:15.995741 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:33:15.996679 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:33:15.997586 systemd-logind[1442]: Removed session 10. Sep 12 17:33:16.603653 kubelet[2504]: E0912 17:33:16.603605 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:16.877057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300792948.mount: Deactivated successfully. Sep 12 17:33:17.216166 systemd-networkd[1401]: cali540bb2bc8a7: Gained IPv6LL Sep 12 17:33:17.728172 containerd[1459]: time="2025-09-12T17:33:17.728120017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:17.729092 containerd[1459]: time="2025-09-12T17:33:17.729003981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:33:17.730264 containerd[1459]: time="2025-09-12T17:33:17.730233765Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:17.732780 containerd[1459]: time="2025-09-12T17:33:17.732739659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:17.733726 containerd[1459]: time="2025-09-12T17:33:17.733682775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.40688293s" Sep 12 17:33:17.733797 containerd[1459]: time="2025-09-12T17:33:17.733719525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:33:17.735196 containerd[1459]: time="2025-09-12T17:33:17.735156784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:33:17.736775 containerd[1459]: time="2025-09-12T17:33:17.736316894Z" level=info msg="CreateContainer within sandbox 
\"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:33:17.761286 containerd[1459]: time="2025-09-12T17:33:17.761147672Z" level=info msg="CreateContainer within sandbox \"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a\"" Sep 12 17:33:17.762212 containerd[1459]: time="2025-09-12T17:33:17.762164259Z" level=info msg="StartContainer for \"bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a\"" Sep 12 17:33:17.796928 systemd[1]: run-containerd-runc-k8s.io-bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a-runc.epmZ44.mount: Deactivated successfully. Sep 12 17:33:17.806738 systemd[1]: Started cri-containerd-bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a.scope - libcontainer container bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a. Sep 12 17:33:17.865614 containerd[1459]: time="2025-09-12T17:33:17.865567836Z" level=info msg="StartContainer for \"bfd4f92db58fca3eb742d848c81a06df92ab1f4b37f84279a67ee9ba23939a1a\" returns successfully" Sep 12 17:33:18.675340 kubelet[2504]: I0912 17:33:18.674549 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-95vvq" podStartSLOduration=28.292820014 podStartE2EDuration="34.674532417s" podCreationTimestamp="2025-09-12 17:32:44 +0000 UTC" firstStartedPulling="2025-09-12 17:33:11.353163686 +0000 UTC m=+47.895117871" lastFinishedPulling="2025-09-12 17:33:17.734876089 +0000 UTC m=+54.276830274" observedRunningTime="2025-09-12 17:33:18.673969955 +0000 UTC m=+55.215924151" watchObservedRunningTime="2025-09-12 17:33:18.674532417 +0000 UTC m=+55.216486602" Sep 12 17:33:21.001698 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:48518.service - OpenSSH per-connection server daemon (10.0.0.1:48518). Sep 12 17:33:21.175774 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 48518 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:21.177644 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:21.182447 systemd-logind[1442]: New session 11 of user core. Sep 12 17:33:21.193009 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 17:33:21.227467 containerd[1459]: time="2025-09-12T17:33:21.227385315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:21.228450 containerd[1459]: time="2025-09-12T17:33:21.228365571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:33:21.229806 containerd[1459]: time="2025-09-12T17:33:21.229774414Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:21.232558 containerd[1459]: time="2025-09-12T17:33:21.232502909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:21.233345 containerd[1459]: time="2025-09-12T17:33:21.233316288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.498112986s" Sep 12 17:33:21.233406 containerd[1459]: time="2025-09-12T17:33:21.233348018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:33:21.234782 containerd[1459]: time="2025-09-12T17:33:21.234731243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:33:21.236205 containerd[1459]: time="2025-09-12T17:33:21.236068309Z" level=info msg="CreateContainer within sandbox \"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:33:21.254023 containerd[1459]: time="2025-09-12T17:33:21.253869776Z" level=info msg="CreateContainer within sandbox \"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"248f084b2fc069a107ab3f0541c3a239d3bc21aa52d05c15d39e82425d619061\"" Sep 12 17:33:21.255306 containerd[1459]: time="2025-09-12T17:33:21.255190491Z" level=info msg="StartContainer for \"248f084b2fc069a107ab3f0541c3a239d3bc21aa52d05c15d39e82425d619061\"" Sep 12 17:33:21.292009 systemd[1]: Started cri-containerd-248f084b2fc069a107ab3f0541c3a239d3bc21aa52d05c15d39e82425d619061.scope - libcontainer container 248f084b2fc069a107ab3f0541c3a239d3bc21aa52d05c15d39e82425d619061. Sep 12 17:33:21.340653 containerd[1459]: time="2025-09-12T17:33:21.340360601Z" level=info msg="StartContainer for \"248f084b2fc069a107ab3f0541c3a239d3bc21aa52d05c15d39e82425d619061\" returns successfully" Sep 12 17:33:21.342592 sshd[5186]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:21.351496 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:48518.service: Deactivated successfully. Sep 12 17:33:21.354829 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:33:21.357210 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:33:21.364731 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:48526.service - OpenSSH per-connection server daemon (10.0.0.1:48526). 
Sep 12 17:33:21.366153 systemd-logind[1442]: Removed session 11. Sep 12 17:33:21.398702 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 48526 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:21.400636 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:21.406567 systemd-logind[1442]: New session 12 of user core. Sep 12 17:33:21.417001 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:33:21.569915 sshd[5235]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:21.583808 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:48526.service: Deactivated successfully. Sep 12 17:33:21.587363 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:33:21.590534 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:33:21.603455 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:48542.service - OpenSSH per-connection server daemon (10.0.0.1:48542). Sep 12 17:33:21.604785 systemd-logind[1442]: Removed session 12. Sep 12 17:33:21.640318 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 48542 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:21.642078 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:21.646358 systemd-logind[1442]: New session 13 of user core. Sep 12 17:33:21.655993 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:33:21.776165 sshd[5252]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:21.780712 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:48542.service: Deactivated successfully. Sep 12 17:33:21.782841 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:33:21.783607 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:33:21.784532 systemd-logind[1442]: Removed session 13. Sep 12 17:33:23.532786 containerd[1459]: time="2025-09-12T17:33:23.532735001Z" level=info msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.574 [WARNING][5281] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ee8ec-ca22-47e2-8cb0-925053d685b4", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d", Pod:"csi-node-driver-x4vh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42dcfbe157b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.575 [INFO][5281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.575 [INFO][5281] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" iface="eth0" netns="" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.575 [INFO][5281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.575 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.600 [INFO][5293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.600 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.600 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.605 [WARNING][5293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.605 [INFO][5293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.607 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:23.613871 containerd[1459]: 2025-09-12 17:33:23.610 [INFO][5281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.614449 containerd[1459]: time="2025-09-12T17:33:23.613909532Z" level=info msg="TearDown network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" successfully" Sep 12 17:33:23.614449 containerd[1459]: time="2025-09-12T17:33:23.613938568Z" level=info msg="StopPodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" returns successfully" Sep 12 17:33:23.622913 containerd[1459]: time="2025-09-12T17:33:23.622835860Z" level=info msg="RemovePodSandbox for \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" Sep 12 17:33:23.626678 containerd[1459]: time="2025-09-12T17:33:23.626640642Z" level=info msg="Forcibly stopping sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\"" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.659 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--x4vh8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"948ee8ec-ca22-47e2-8cb0-925053d685b4", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d", Pod:"csi-node-driver-x4vh8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali42dcfbe157b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.660 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.660 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" iface="eth0" netns="" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.660 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.660 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.682 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.683 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.683 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.688 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.688 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" HandleID="k8s-pod-network.f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Workload="localhost-k8s-csi--node--driver--x4vh8-eth0" Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.689 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:23.695478 containerd[1459]: 2025-09-12 17:33:23.692 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2" Sep 12 17:33:23.696002 containerd[1459]: time="2025-09-12T17:33:23.695529302Z" level=info msg="TearDown network for sandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" successfully" Sep 12 17:33:23.702468 containerd[1459]: time="2025-09-12T17:33:23.702413065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:23.702634 containerd[1459]: time="2025-09-12T17:33:23.702491725Z" level=info msg="RemovePodSandbox \"f950bfae56937f94ea2a045295a2a7270b0da6008a36d015e1fe4ad2f0c191e2\" returns successfully" Sep 12 17:33:23.703289 containerd[1459]: time="2025-09-12T17:33:23.702989561Z" level=info msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.735 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c771cb80-b69d-4703-831e-b3564032fbcd", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a", Pod:"coredns-668d6bf9bc-hlnxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ae1163743", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.735 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.735 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" iface="eth0" netns="" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.735 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.735 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.754 [INFO][5345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.755 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.755 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.760 [WARNING][5345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.760 [INFO][5345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.761 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:23.767208 containerd[1459]: 2025-09-12 17:33:23.764 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.767616 containerd[1459]: time="2025-09-12T17:33:23.767253501Z" level=info msg="TearDown network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" successfully" Sep 12 17:33:23.767616 containerd[1459]: time="2025-09-12T17:33:23.767277466Z" level=info msg="StopPodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" returns successfully" Sep 12 17:33:23.767813 containerd[1459]: time="2025-09-12T17:33:23.767770413Z" level=info msg="RemovePodSandbox for \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" Sep 12 17:33:23.767813 containerd[1459]: time="2025-09-12T17:33:23.767809447Z" level=info msg="Forcibly stopping sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\"" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.800 [WARNING][5361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c771cb80-b69d-4703-831e-b3564032fbcd", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40fc2aa1deb2c1c2ef46cc17f96d2e39cab30a8ba77cacdf0f62f4ce0e17693a", Pod:"coredns-668d6bf9bc-hlnxj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali05ae1163743", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.800 [INFO][5361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.800 [INFO][5361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" iface="eth0" netns="" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.800 [INFO][5361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.801 [INFO][5361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.823 [INFO][5370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.823 [INFO][5370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.823 [INFO][5370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.828 [WARNING][5370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.828 [INFO][5370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" HandleID="k8s-pod-network.8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Workload="localhost-k8s-coredns--668d6bf9bc--hlnxj-eth0" Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.831 [INFO][5370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:23.836691 containerd[1459]: 2025-09-12 17:33:23.834 [INFO][5361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea" Sep 12 17:33:23.837222 containerd[1459]: time="2025-09-12T17:33:23.837184312Z" level=info msg="TearDown network for sandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" successfully" Sep 12 17:33:23.841258 containerd[1459]: time="2025-09-12T17:33:23.841232867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:23.841324 containerd[1459]: time="2025-09-12T17:33:23.841277121Z" level=info msg="RemovePodSandbox \"8b065518e078e9be9e51e45ddc25d0e56ef9470c43ce46383b0a37d62a9c96ea\" returns successfully" Sep 12 17:33:23.841798 containerd[1459]: time="2025-09-12T17:33:23.841775909Z" level=info msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.877 [WARNING][5388] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7", Pod:"calico-apiserver-6778dc6895-fdvxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali540bb2bc8a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.877 [INFO][5388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.877 [INFO][5388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" iface="eth0" netns="" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.877 [INFO][5388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.877 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.897 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.897 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.898 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.903 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.903 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.904 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:23.909779 containerd[1459]: 2025-09-12 17:33:23.906 [INFO][5388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:23.910276 containerd[1459]: time="2025-09-12T17:33:23.909833218Z" level=info msg="TearDown network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" successfully" Sep 12 17:33:23.910276 containerd[1459]: time="2025-09-12T17:33:23.909887672Z" level=info msg="StopPodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" returns successfully" Sep 12 17:33:23.910492 containerd[1459]: time="2025-09-12T17:33:23.910461954Z" level=info msg="RemovePodSandbox for \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" Sep 12 17:33:23.910570 containerd[1459]: time="2025-09-12T17:33:23.910499836Z" level=info msg="Forcibly stopping sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\"" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.955 [WARNING][5415] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"9aeaba7e-9c59-470f-82df-9b9fbf5dd0db", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7", Pod:"calico-apiserver-6778dc6895-fdvxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali540bb2bc8a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.955 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.955 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" iface="eth0" netns="" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.955 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.955 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.990 [INFO][5424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.990 [INFO][5424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:23.990 [INFO][5424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:24.002 [WARNING][5424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:24.002 [INFO][5424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" HandleID="k8s-pod-network.e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Workload="localhost-k8s-calico--apiserver--6778dc6895--fdvxx-eth0" Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:24.007 [INFO][5424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.016010 containerd[1459]: 2025-09-12 17:33:24.009 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54" Sep 12 17:33:24.016640 containerd[1459]: time="2025-09-12T17:33:24.016057233Z" level=info msg="TearDown network for sandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" successfully" Sep 12 17:33:24.071233 containerd[1459]: time="2025-09-12T17:33:24.071176831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:24.071409 containerd[1459]: time="2025-09-12T17:33:24.071257936Z" level=info msg="RemovePodSandbox \"e86dba88bfaf00e12fa57dd3a1d07d16a95761473842cdbd1aefe5992d808c54\" returns successfully" Sep 12 17:33:24.071762 containerd[1459]: time="2025-09-12T17:33:24.071740123Z" level=info msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.113 [WARNING][5441] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" WorkloadEndpoint="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.113 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.113 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" iface="eth0" netns="" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.113 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.113 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.141 [INFO][5449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.141 [INFO][5449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.141 [INFO][5449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.147 [WARNING][5449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.147 [INFO][5449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.150 [INFO][5449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.158603 containerd[1459]: 2025-09-12 17:33:24.155 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.158603 containerd[1459]: time="2025-09-12T17:33:24.158564513Z" level=info msg="TearDown network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" successfully" Sep 12 17:33:24.158603 containerd[1459]: time="2025-09-12T17:33:24.158597666Z" level=info msg="StopPodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" returns successfully" Sep 12 17:33:24.159495 containerd[1459]: time="2025-09-12T17:33:24.159402616Z" level=info msg="RemovePodSandbox for \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" Sep 12 17:33:24.159495 containerd[1459]: time="2025-09-12T17:33:24.159450117Z" level=info msg="Forcibly stopping sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\"" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.199 [WARNING][5466] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" WorkloadEndpoint="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.199 [INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.199 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" iface="eth0" netns="" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.199 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.199 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.225 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.226 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.226 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.232 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.232 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" HandleID="k8s-pod-network.28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Workload="localhost-k8s-whisker--78cb64f49f--nhnl8-eth0" Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.235 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.241642 containerd[1459]: 2025-09-12 17:33:24.238 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492" Sep 12 17:33:24.242333 containerd[1459]: time="2025-09-12T17:33:24.241670045Z" level=info msg="TearDown network for sandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" successfully" Sep 12 17:33:24.246365 containerd[1459]: time="2025-09-12T17:33:24.246132196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:24.246365 containerd[1459]: time="2025-09-12T17:33:24.246237316Z" level=info msg="RemovePodSandbox \"28cf2043b9448efffe2e0a638bde5c8caf8a7cd2d0587abac72561eec77df492\" returns successfully" Sep 12 17:33:24.246967 containerd[1459]: time="2025-09-12T17:33:24.246939601Z" level=info msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.284 [WARNING][5497] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891", Pod:"calico-apiserver-6778dc6895-shqls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1638fc3401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.284 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.284 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" iface="eth0" netns="" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.284 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.284 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.313 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.313 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.314 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.319 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.319 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.320 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.326509 containerd[1459]: 2025-09-12 17:33:24.323 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.327032 containerd[1459]: time="2025-09-12T17:33:24.326563877Z" level=info msg="TearDown network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" successfully" Sep 12 17:33:24.327032 containerd[1459]: time="2025-09-12T17:33:24.326593143Z" level=info msg="StopPodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" returns successfully" Sep 12 17:33:24.327314 containerd[1459]: time="2025-09-12T17:33:24.327281320Z" level=info msg="RemovePodSandbox for \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" Sep 12 17:33:24.327314 containerd[1459]: time="2025-09-12T17:33:24.327313151Z" level=info msg="Forcibly stopping sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\"" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.365 [WARNING][5524] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0", GenerateName:"calico-apiserver-6778dc6895-", Namespace:"calico-apiserver", SelfLink:"", UID:"2bc8466e-fde1-48f6-8cd4-b09a878f3fd7", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6778dc6895", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891", Pod:"calico-apiserver-6778dc6895-shqls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1638fc3401", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.365 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.365 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" iface="eth0" netns="" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.365 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.365 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.401 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.402 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.402 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.409 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.409 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" HandleID="k8s-pod-network.ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Workload="localhost-k8s-calico--apiserver--6778dc6895--shqls-eth0" Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.410 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.417982 containerd[1459]: 2025-09-12 17:33:24.413 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369" Sep 12 17:33:24.417982 containerd[1459]: time="2025-09-12T17:33:24.417245049Z" level=info msg="TearDown network for sandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" successfully" Sep 12 17:33:24.422996 containerd[1459]: time="2025-09-12T17:33:24.422951264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:24.423121 containerd[1459]: time="2025-09-12T17:33:24.423040844Z" level=info msg="RemovePodSandbox \"ee3bc49fd2f3696327740f041ae2f3beb9c0b01271cbf32a8f7ee3e958566369\" returns successfully" Sep 12 17:33:24.423760 containerd[1459]: time="2025-09-12T17:33:24.423730616Z" level=info msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.468 [WARNING][5549] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--95vvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"52776af4-cb40-450c-a1f6-9e31fbaa89d8", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b", Pod:"goldmane-54d579b49d-95vvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1de226b9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.469 [INFO][5549] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.469 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" iface="eth0" netns="" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.469 [INFO][5549] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.469 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.496 [INFO][5557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.496 [INFO][5557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.496 [INFO][5557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.503 [WARNING][5557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.503 [INFO][5557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.504 [INFO][5557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.511060 containerd[1459]: 2025-09-12 17:33:24.507 [INFO][5549] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.511660 containerd[1459]: time="2025-09-12T17:33:24.511089442Z" level=info msg="TearDown network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" successfully" Sep 12 17:33:24.511660 containerd[1459]: time="2025-09-12T17:33:24.511121824Z" level=info msg="StopPodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" returns successfully" Sep 12 17:33:24.512078 containerd[1459]: time="2025-09-12T17:33:24.511757602Z" level=info msg="RemovePodSandbox for \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" Sep 12 17:33:24.512078 containerd[1459]: time="2025-09-12T17:33:24.511799401Z" level=info msg="Forcibly stopping sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\"" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.551 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--95vvq-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"52776af4-cb40-450c-a1f6-9e31fbaa89d8", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b5b5c874828fe52020aea294234b09f90f8df7c77e2b558a97039f0311ad06b", Pod:"goldmane-54d579b49d-95vvq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1de226b9e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.552 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.552 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" iface="eth0" netns="" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.552 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.552 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.579 [INFO][5583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.579 [INFO][5583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.579 [INFO][5583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.586 [WARNING][5583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.586 [INFO][5583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" HandleID="k8s-pod-network.9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Workload="localhost-k8s-goldmane--54d579b49d--95vvq-eth0" Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.588 [INFO][5583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.599174 containerd[1459]: 2025-09-12 17:33:24.595 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea" Sep 12 17:33:24.599174 containerd[1459]: time="2025-09-12T17:33:24.599114386Z" level=info msg="TearDown network for sandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" successfully" Sep 12 17:33:24.790929 containerd[1459]: time="2025-09-12T17:33:24.790299870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:33:24.790929 containerd[1459]: time="2025-09-12T17:33:24.790395672Z" level=info msg="RemovePodSandbox \"9e1f28c3ac3df075e0bdbcf543e8d8e0cbc2a5b5aff782f682956e9406aee8ea\" returns successfully" Sep 12 17:33:24.791088 containerd[1459]: time="2025-09-12T17:33:24.790975805Z" level=info msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" Sep 12 17:33:24.865657 containerd[1459]: time="2025-09-12T17:33:24.864962664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:24.867109 containerd[1459]: time="2025-09-12T17:33:24.867044682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:33:24.868155 containerd[1459]: time="2025-09-12T17:33:24.868129514Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.831 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0", GenerateName:"calico-kube-controllers-8b99944df-", Namespace:"calico-system", SelfLink:"", UID:"a81211fa-8f2f-44d0-8a0d-86148fe788e8", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b99944df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3", Pod:"calico-kube-controllers-8b99944df-4v9pg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038598ee7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.831 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.832 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" iface="eth0" netns="" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.832 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.832 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.855 [INFO][5609] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.856 [INFO][5609] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.856 [INFO][5609] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.862 [WARNING][5609] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.862 [INFO][5609] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.864 [INFO][5609] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.869653 containerd[1459]: 2025-09-12 17:33:24.866 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.870084 containerd[1459]: time="2025-09-12T17:33:24.869680071Z" level=info msg="TearDown network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" successfully" Sep 12 17:33:24.870084 containerd[1459]: time="2025-09-12T17:33:24.869706923Z" level=info msg="StopPodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" returns successfully" Sep 12 17:33:24.870681 containerd[1459]: time="2025-09-12T17:33:24.870623414Z" level=info msg="RemovePodSandbox for \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" Sep 12 17:33:24.870681 containerd[1459]: time="2025-09-12T17:33:24.870666095Z" level=info msg="Forcibly stopping sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\"" Sep 12 17:33:24.870796 containerd[1459]: time="2025-09-12T17:33:24.870696693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:24.871361 containerd[1459]: time="2025-09-12T17:33:24.871322883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.636545222s" Sep 12 17:33:24.871361 containerd[1459]: time="2025-09-12T17:33:24.871354734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:33:24.872790 containerd[1459]: time="2025-09-12T17:33:24.872757380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:33:24.874300 containerd[1459]: time="2025-09-12T17:33:24.874266158Z" level=info msg="CreateContainer within sandbox \"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:33:24.889706 containerd[1459]: time="2025-09-12T17:33:24.889544207Z" level=info msg="CreateContainer within sandbox \"46fddb4e9f1c7ab717fad7fad4e33ca27f0626c5d61d33923e18364916daf891\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c93a8462fde435638a065ff58b721809cfdb384f441137b222f4b110da6ebe3\"" Sep 12 17:33:24.891739 containerd[1459]: 
time="2025-09-12T17:33:24.890763375Z" level=info msg="StartContainer for \"2c93a8462fde435638a065ff58b721809cfdb384f441137b222f4b110da6ebe3\"" Sep 12 17:33:24.923997 systemd[1]: Started cri-containerd-2c93a8462fde435638a065ff58b721809cfdb384f441137b222f4b110da6ebe3.scope - libcontainer container 2c93a8462fde435638a065ff58b721809cfdb384f441137b222f4b110da6ebe3. Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.918 [WARNING][5630] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0", GenerateName:"calico-kube-controllers-8b99944df-", Namespace:"calico-system", SelfLink:"", UID:"a81211fa-8f2f-44d0-8a0d-86148fe788e8", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b99944df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c65be40a480eabce1232fc677f296e5ba60e0dbcd7fe8640353ea0ba2158df3", Pod:"calico-kube-controllers-8b99944df-4v9pg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali038598ee7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.918 [INFO][5630] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.918 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" iface="eth0" netns="" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.918 [INFO][5630] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.918 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.954 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.954 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.954 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.961 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.961 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" HandleID="k8s-pod-network.6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Workload="localhost-k8s-calico--kube--controllers--8b99944df--4v9pg-eth0" Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.966 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:24.973026 containerd[1459]: 2025-09-12 17:33:24.969 [INFO][5630] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5" Sep 12 17:33:24.973026 containerd[1459]: time="2025-09-12T17:33:24.972244359Z" level=info msg="TearDown network for sandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" successfully" Sep 12 17:33:25.529983 containerd[1459]: time="2025-09-12T17:33:25.529907253Z" level=info msg="StartContainer for \"2c93a8462fde435638a065ff58b721809cfdb384f441137b222f4b110da6ebe3\" returns successfully" Sep 12 17:33:25.538443 containerd[1459]: time="2025-09-12T17:33:25.538373595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:33:25.538638 containerd[1459]: time="2025-09-12T17:33:25.538458176Z" level=info msg="RemovePodSandbox \"6c4e30edea325e901f12f271375b03f2f7aceb05c75ae1be9d0c4d5a368677f5\" returns successfully" Sep 12 17:33:25.539041 containerd[1459]: time="2025-09-12T17:33:25.538994094Z" level=info msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.575 [WARNING][5697] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855", Pod:"coredns-668d6bf9bc-bbp9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476f6c75536", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.575 [INFO][5697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.576 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" iface="eth0" netns="" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.576 [INFO][5697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.576 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.608 [INFO][5706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.608 [INFO][5706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.608 [INFO][5706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.614 [WARNING][5706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.614 [INFO][5706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.618 [INFO][5706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:25.624692 containerd[1459]: 2025-09-12 17:33:25.621 [INFO][5697] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.625527 containerd[1459]: time="2025-09-12T17:33:25.624748681Z" level=info msg="TearDown network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" successfully" Sep 12 17:33:25.625527 containerd[1459]: time="2025-09-12T17:33:25.624775803Z" level=info msg="StopPodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" returns successfully" Sep 12 17:33:25.625921 containerd[1459]: time="2025-09-12T17:33:25.625835447Z" level=info msg="RemovePodSandbox for \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" Sep 12 17:33:25.625921 containerd[1459]: time="2025-09-12T17:33:25.625919137Z" level=info msg="Forcibly stopping sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\"" Sep 12 17:33:25.650305 kubelet[2504]: I0912 17:33:25.650220 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6778dc6895-shqls" podStartSLOduration=32.06755344 podStartE2EDuration="44.650202823s" podCreationTimestamp="2025-09-12 17:32:41 +0000 UTC" firstStartedPulling="2025-09-12 17:33:12.289915961 +0000 UTC m=+48.831870147" lastFinishedPulling="2025-09-12 17:33:24.872565345 +0000 UTC m=+61.414519530" observedRunningTime="2025-09-12 17:33:25.650124104 +0000 UTC m=+62.192078289" watchObservedRunningTime="2025-09-12 17:33:25.650202823 +0000 UTC m=+62.192156998" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.672 [WARNING][5724] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9139155a-95e6-4e1d-8bf7-6c551b8fbb1a", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4b711532cee184dd0d2ca0d189764876e68c2d8242e14f76ae890e55593b855", Pod:"coredns-668d6bf9bc-bbp9v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali476f6c75536", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.673 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.673 [INFO][5724] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" iface="eth0" netns="" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.673 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.673 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.700 [INFO][5734] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.701 [INFO][5734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.701 [INFO][5734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.711 [WARNING][5734] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.711 [INFO][5734] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" HandleID="k8s-pod-network.3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Workload="localhost-k8s-coredns--668d6bf9bc--bbp9v-eth0" Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.712 [INFO][5734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:33:25.718654 containerd[1459]: 2025-09-12 17:33:25.715 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c" Sep 12 17:33:25.719199 containerd[1459]: time="2025-09-12T17:33:25.718700681Z" level=info msg="TearDown network for sandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" successfully" Sep 12 17:33:25.723048 containerd[1459]: time="2025-09-12T17:33:25.723015682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:33:25.723146 containerd[1459]: time="2025-09-12T17:33:25.723071287Z" level=info msg="RemovePodSandbox \"3c5b0301db9eaec4e6de0f4ec529975cf1b26015343480cec9fffacab42cfd9c\" returns successfully" Sep 12 17:33:26.790205 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Sep 12 17:33:26.853974 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:26.855776 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:26.860615 systemd-logind[1442]: New session 14 of user core. Sep 12 17:33:26.868013 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:33:27.069248 containerd[1459]: time="2025-09-12T17:33:27.069186893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.070052 containerd[1459]: time="2025-09-12T17:33:27.069991823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:33:27.073637 containerd[1459]: time="2025-09-12T17:33:27.071923925Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.072343 sshd[5748]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:27.074503 containerd[1459]: time="2025-09-12T17:33:27.074468743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.075179 containerd[1459]: time="2025-09-12T17:33:27.075150498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.202360826s" Sep 12 17:33:27.075234 containerd[1459]: time="2025-09-12T17:33:27.075184924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:33:27.076169 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:33:27.076497 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:48554.service: Deactivated successfully. Sep 12 17:33:27.079722 containerd[1459]: time="2025-09-12T17:33:27.079009724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:33:27.079051 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:33:27.080902 containerd[1459]: time="2025-09-12T17:33:27.080509193Z" level=info msg="CreateContainer within sandbox \"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:33:27.081449 systemd-logind[1442]: Removed session 14. 
Sep 12 17:33:27.113297 containerd[1459]: time="2025-09-12T17:33:27.113235167Z" level=info msg="CreateContainer within sandbox \"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0e3d78b55d6f1ae1cde4d6311bf390d6bb3b11bf5495392b0420a74f9b9aca96\"" Sep 12 17:33:27.114356 containerd[1459]: time="2025-09-12T17:33:27.114311493Z" level=info msg="StartContainer for \"0e3d78b55d6f1ae1cde4d6311bf390d6bb3b11bf5495392b0420a74f9b9aca96\"" Sep 12 17:33:27.176066 systemd[1]: Started cri-containerd-0e3d78b55d6f1ae1cde4d6311bf390d6bb3b11bf5495392b0420a74f9b9aca96.scope - libcontainer container 0e3d78b55d6f1ae1cde4d6311bf390d6bb3b11bf5495392b0420a74f9b9aca96. Sep 12 17:33:27.209306 containerd[1459]: time="2025-09-12T17:33:27.209255637Z" level=info msg="StartContainer for \"0e3d78b55d6f1ae1cde4d6311bf390d6bb3b11bf5495392b0420a74f9b9aca96\" returns successfully" Sep 12 17:33:27.469336 containerd[1459]: time="2025-09-12T17:33:27.469150188Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:27.470416 containerd[1459]: time="2025-09-12T17:33:27.470359517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:33:27.472615 containerd[1459]: time="2025-09-12T17:33:27.472540722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 393.48903ms" Sep 12 17:33:27.472615 containerd[1459]: time="2025-09-12T17:33:27.472609483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:33:27.473700 containerd[1459]: time="2025-09-12T17:33:27.473673396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:33:27.474893 containerd[1459]: time="2025-09-12T17:33:27.474871744Z" level=info msg="CreateContainer within sandbox \"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:33:27.488437 containerd[1459]: time="2025-09-12T17:33:27.488378687Z" level=info msg="CreateContainer within sandbox \"b38764d175476058c6fcd6e699b7553d827bae88fe7c7d1ab06e2031246e06c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cd1afe4f17cbdffa3a24dbd68cf288de36747de2854a6d16c8331e2e3930ef8c\"" Sep 12 17:33:27.488985 containerd[1459]: time="2025-09-12T17:33:27.488957176Z" level=info msg="StartContainer for \"cd1afe4f17cbdffa3a24dbd68cf288de36747de2854a6d16c8331e2e3930ef8c\"" Sep 12 17:33:27.522060 systemd[1]: Started cri-containerd-cd1afe4f17cbdffa3a24dbd68cf288de36747de2854a6d16c8331e2e3930ef8c.scope - libcontainer container cd1afe4f17cbdffa3a24dbd68cf288de36747de2854a6d16c8331e2e3930ef8c. 
Sep 12 17:33:27.572535 containerd[1459]: time="2025-09-12T17:33:27.572489269Z" level=info msg="StartContainer for \"cd1afe4f17cbdffa3a24dbd68cf288de36747de2854a6d16c8331e2e3930ef8c\" returns successfully" Sep 12 17:33:27.658508 kubelet[2504]: I0912 17:33:27.657322 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6778dc6895-fdvxx" podStartSLOduration=35.089917155 podStartE2EDuration="46.657300593s" podCreationTimestamp="2025-09-12 17:32:41 +0000 UTC" firstStartedPulling="2025-09-12 17:33:15.90605391 +0000 UTC m=+52.448008095" lastFinishedPulling="2025-09-12 17:33:27.473437347 +0000 UTC m=+64.015391533" observedRunningTime="2025-09-12 17:33:27.657132042 +0000 UTC m=+64.199086227" watchObservedRunningTime="2025-09-12 17:33:27.657300593 +0000 UTC m=+64.199254778" Sep 12 17:33:28.648875 kubelet[2504]: I0912 17:33:28.648810 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:33:32.087162 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:59770.service - OpenSSH per-connection server daemon (10.0.0.1:59770). Sep 12 17:33:32.160647 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 59770 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:32.162893 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:32.173189 systemd-logind[1442]: New session 15 of user core. Sep 12 17:33:32.185006 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:33:32.409377 sshd[5858]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:32.413818 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:59770.service: Deactivated successfully. Sep 12 17:33:32.416301 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:33:32.417030 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:33:32.418124 systemd-logind[1442]: Removed session 15. Sep 12 17:33:33.613862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3579503080.mount: Deactivated successfully. 
Sep 12 17:33:33.718555 containerd[1459]: time="2025-09-12T17:33:33.718454713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:33:33.722764 containerd[1459]: time="2025-09-12T17:33:33.722702577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 6.248993705s" Sep 12 17:33:33.722764 containerd[1459]: time="2025-09-12T17:33:33.722749827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:33:33.724945 containerd[1459]: time="2025-09-12T17:33:33.724617356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:33:33.727322 containerd[1459]: time="2025-09-12T17:33:33.727280078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:33.728081 containerd[1459]: time="2025-09-12T17:33:33.728058567Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:33.730435 containerd[1459]: time="2025-09-12T17:33:33.729272504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:33.734501 containerd[1459]: time="2025-09-12T17:33:33.734443794Z" level=info msg="CreateContainer within sandbox \"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:33:33.855703 containerd[1459]: time="2025-09-12T17:33:33.855629765Z" level=info msg="CreateContainer within sandbox \"c6a50fe287d23c34979ef13809d6ce5047d2e4496c657f49aa0d455c0497c871\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b3c38b67cda086aebdc5cb690c51a89b72a4ebbd6adcf7b2d664314639fb158e\"" Sep 12 17:33:33.856376 containerd[1459]: time="2025-09-12T17:33:33.856339535Z" level=info msg="StartContainer for \"b3c38b67cda086aebdc5cb690c51a89b72a4ebbd6adcf7b2d664314639fb158e\"" Sep 12 17:33:33.888014 systemd[1]: Started cri-containerd-b3c38b67cda086aebdc5cb690c51a89b72a4ebbd6adcf7b2d664314639fb158e.scope - libcontainer container b3c38b67cda086aebdc5cb690c51a89b72a4ebbd6adcf7b2d664314639fb158e. 
Sep 12 17:33:34.089627 containerd[1459]: time="2025-09-12T17:33:34.089553129Z" level=info msg="StartContainer for \"b3c38b67cda086aebdc5cb690c51a89b72a4ebbd6adcf7b2d664314639fb158e\" returns successfully" Sep 12 17:33:35.255228 containerd[1459]: time="2025-09-12T17:33:35.255161110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.256173 containerd[1459]: time="2025-09-12T17:33:35.255765939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:33:35.257026 containerd[1459]: time="2025-09-12T17:33:35.256994655Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.259275 containerd[1459]: time="2025-09-12T17:33:35.259239031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:33:35.259920 containerd[1459]: time="2025-09-12T17:33:35.259886683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.535226845s" Sep 12 17:33:35.259957 containerd[1459]: time="2025-09-12T17:33:35.259922691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:33:35.261975 containerd[1459]: time="2025-09-12T17:33:35.261947049Z" level=info msg="CreateContainer within sandbox \"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:33:35.284501 containerd[1459]: time="2025-09-12T17:33:35.284450381Z" level=info msg="CreateContainer within sandbox \"74dd907a1c67e63de848d5195d2c4e1c0b176520640037ec77fa1ac97482628d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f50790ddd5da65ada8bf7aba13fae9c022617389597c21fda3dce277ed7ed526\"" Sep 12 17:33:35.285114 containerd[1459]: time="2025-09-12T17:33:35.285084136Z" level=info msg="StartContainer for \"f50790ddd5da65ada8bf7aba13fae9c022617389597c21fda3dce277ed7ed526\"" Sep 12 17:33:35.327010 systemd[1]: Started cri-containerd-f50790ddd5da65ada8bf7aba13fae9c022617389597c21fda3dce277ed7ed526.scope - libcontainer container f50790ddd5da65ada8bf7aba13fae9c022617389597c21fda3dce277ed7ed526. 
Sep 12 17:33:35.359235 containerd[1459]: time="2025-09-12T17:33:35.359185893Z" level=info msg="StartContainer for \"f50790ddd5da65ada8bf7aba13fae9c022617389597c21fda3dce277ed7ed526\" returns successfully" Sep 12 17:33:35.501417 kubelet[2504]: I0912 17:33:35.501358 2504 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:33:35.501417 kubelet[2504]: I0912 17:33:35.501407 2504 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:33:35.679930 kubelet[2504]: I0912 17:33:35.679841 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-x4vh8" podStartSLOduration=30.471040271 podStartE2EDuration="51.679816463s" podCreationTimestamp="2025-09-12 17:32:44 +0000 UTC" firstStartedPulling="2025-09-12 17:33:14.051838954 +0000 UTC m=+50.593793139" lastFinishedPulling="2025-09-12 17:33:35.260615146 +0000 UTC m=+71.802569331" observedRunningTime="2025-09-12 17:33:35.677506863 +0000 UTC m=+72.219461068" watchObservedRunningTime="2025-09-12 17:33:35.679816463 +0000 UTC m=+72.221770648" Sep 12 17:33:35.680420 kubelet[2504]: I0912 17:33:35.680170 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8496b6457d-dq9gb" podStartSLOduration=3.807996469 podStartE2EDuration="25.680164835s" podCreationTimestamp="2025-09-12 17:33:10 +0000 UTC" firstStartedPulling="2025-09-12 17:33:11.851603121 +0000 UTC m=+48.393557307" lastFinishedPulling="2025-09-12 17:33:33.723771488 +0000 UTC m=+70.265725673" observedRunningTime="2025-09-12 17:33:34.679650353 +0000 UTC m=+71.221604528" watchObservedRunningTime="2025-09-12 17:33:35.680164835 +0000 UTC m=+72.222119020" Sep 12 17:33:37.425914 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:59772.service - OpenSSH per-connection server daemon (10.0.0.1:59772). Sep 12 17:33:37.476566 sshd[5966]: Accepted publickey for core from 10.0.0.1 port 59772 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:37.478363 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:37.482935 systemd-logind[1442]: New session 16 of user core. Sep 12 17:33:37.492016 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:33:37.731923 sshd[5966]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:37.736615 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:59772.service: Deactivated successfully. Sep 12 17:33:37.738629 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:33:37.739360 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:33:37.740272 systemd-logind[1442]: Removed session 16. Sep 12 17:33:42.553237 kubelet[2504]: E0912 17:33:42.553180 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:42.759272 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:35848.service - OpenSSH per-connection server daemon (10.0.0.1:35848). 
Sep 12 17:33:42.795407 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 35848 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:42.797247 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:42.802226 systemd-logind[1442]: New session 17 of user core. Sep 12 17:33:42.807994 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:33:42.955382 sshd[6001]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:42.959832 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:35848.service: Deactivated successfully. Sep 12 17:33:42.962247 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:33:42.962954 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:33:42.964020 systemd-logind[1442]: Removed session 17. Sep 12 17:33:43.556192 kubelet[2504]: E0912 17:33:43.556151 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:47.972310 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:35852.service - OpenSSH per-connection server daemon (10.0.0.1:35852). Sep 12 17:33:48.005414 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 35852 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:48.007368 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:48.011543 systemd-logind[1442]: New session 18 of user core. Sep 12 17:33:48.021992 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:33:48.190202 sshd[6036]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:48.198107 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:35852.service: Deactivated successfully. Sep 12 17:33:48.200294 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:33:48.201206 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:33:48.216112 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:35864.service - OpenSSH per-connection server daemon (10.0.0.1:35864). Sep 12 17:33:48.216712 systemd-logind[1442]: Removed session 18. Sep 12 17:33:48.253199 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 35864 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:48.254832 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:48.259341 systemd-logind[1442]: New session 19 of user core. Sep 12 17:33:48.268990 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:33:48.595008 sshd[6050]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:48.605785 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:35864.service: Deactivated successfully. Sep 12 17:33:48.607687 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:33:48.609047 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:33:48.616111 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:35870.service - OpenSSH per-connection server daemon (10.0.0.1:35870). Sep 12 17:33:48.617097 systemd-logind[1442]: Removed session 19. 
Sep 12 17:33:48.649090 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 35870 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:48.650727 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:48.654981 systemd-logind[1442]: New session 20 of user core. Sep 12 17:33:48.661978 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:33:49.236080 sshd[6063]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:49.246878 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:35870.service: Deactivated successfully. Sep 12 17:33:49.249251 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:33:49.252006 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:33:49.262123 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878). Sep 12 17:33:49.264059 systemd-logind[1442]: Removed session 20. Sep 12 17:33:49.310904 sshd[6090]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:49.312776 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:49.317721 systemd-logind[1442]: New session 21 of user core. Sep 12 17:33:49.330035 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:33:49.757522 sshd[6090]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:49.779799 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:35878.service: Deactivated successfully. Sep 12 17:33:49.784306 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:33:49.788123 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:33:49.797753 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:35884.service - OpenSSH per-connection server daemon (10.0.0.1:35884). Sep 12 17:33:49.798943 systemd-logind[1442]: Removed session 21. Sep 12 17:33:49.836893 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 35884 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:49.837616 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:49.842416 systemd-logind[1442]: New session 22 of user core. Sep 12 17:33:49.855011 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:33:49.970867 sshd[6126]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:49.975178 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:35884.service: Deactivated successfully. Sep 12 17:33:49.977423 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:33:49.978451 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:33:49.979394 systemd-logind[1442]: Removed session 22. Sep 12 17:33:54.983732 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:59638.service - OpenSSH per-connection server daemon (10.0.0.1:59638). Sep 12 17:33:55.003927 kubelet[2504]: I0912 17:33:55.003879 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:33:55.037826 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 59638 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:33:55.041253 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:55.049054 systemd-logind[1442]: New session 23 of user core. 
Sep 12 17:33:55.059019 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:33:55.234117 sshd[6150]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:55.239014 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:59638.service: Deactivated successfully. Sep 12 17:33:55.241378 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:33:55.242148 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:33:55.243163 systemd-logind[1442]: Removed session 23. Sep 12 17:33:56.552287 kubelet[2504]: E0912 17:33:56.552220 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:00.252651 systemd[1]: Started sshd@23-10.0.0.64:22-10.0.0.1:54972.service - OpenSSH per-connection server daemon (10.0.0.1:54972). Sep 12 17:34:00.290554 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 54972 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:00.292678 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:00.298768 systemd-logind[1442]: New session 24 of user core. Sep 12 17:34:00.307012 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:34:00.479141 sshd[6187]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:00.483499 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:34:00.484039 systemd[1]: sshd@23-10.0.0.64:22-10.0.0.1:54972.service: Deactivated successfully. Sep 12 17:34:00.486544 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:34:00.487518 systemd-logind[1442]: Removed session 24. Sep 12 17:34:01.553203 kubelet[2504]: E0912 17:34:01.553158 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:34:05.506325 systemd[1]: Started sshd@24-10.0.0.64:22-10.0.0.1:54974.service - OpenSSH per-connection server daemon (10.0.0.1:54974). Sep 12 17:34:05.547128 sshd[6203]: Accepted publickey for core from 10.0.0.1 port 54974 ssh2: RSA SHA256:aT8LBpGR61nZrCvZPSZnf5qAHr/gCw9azCt0c3x8FJc Sep 12 17:34:05.549234 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:34:05.556828 systemd-logind[1442]: New session 25 of user core. Sep 12 17:34:05.563989 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:34:05.903039 sshd[6203]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:05.908174 systemd[1]: sshd@24-10.0.0.64:22-10.0.0.1:54974.service: Deactivated successfully. Sep 12 17:34:05.910919 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:34:05.911621 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:34:05.912882 systemd-logind[1442]: Removed session 25.