Nov 8 00:19:39.125107 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:19:39.125145 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:39.125170 kernel: BIOS-provided physical RAM map:
Nov 8 00:19:39.125186 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:19:39.125201 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:19:39.125214 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:19:39.125231 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 8 00:19:39.125246 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 8 00:19:39.125261 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:19:39.125281 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:19:39.125298 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:19:39.125312 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:19:39.125334 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:19:39.125350 kernel: NX (Execute Disable) protection: active
Nov 8 00:19:39.125367 kernel: APIC: Static calls initialized
Nov 8 00:19:39.125395 kernel: SMBIOS 2.8 present.
Nov 8 00:19:39.125412 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 8 00:19:39.125427 kernel: Hypervisor detected: KVM
Nov 8 00:19:39.125441 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:19:39.125458 kernel: kvm-clock: using sched offset of 3188638586 cycles
Nov 8 00:19:39.125473 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:19:39.125492 kernel: tsc: Detected 2794.748 MHz processor
Nov 8 00:19:39.125507 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:19:39.125524 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:19:39.125541 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 8 00:19:39.125565 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:19:39.125583 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:19:39.125600 kernel: Using GB pages for direct mapping
Nov 8 00:19:39.125616 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:19:39.125633 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 8 00:19:39.125648 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125665 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125680 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125703 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 8 00:19:39.125719 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125736 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125753 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125769 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:19:39.125786 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 8 00:19:39.125799 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 8 00:19:39.125822 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 8 00:19:39.125846 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 8 00:19:39.125894 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 8 00:19:39.125913 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 8 00:19:39.125931 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 8 00:19:39.125947 kernel: No NUMA configuration found
Nov 8 00:19:39.125973 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 8 00:19:39.125991 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 8 00:19:39.126016 kernel: Zone ranges:
Nov 8 00:19:39.126034 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:19:39.126051 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 8 00:19:39.126067 kernel: Normal empty
Nov 8 00:19:39.126083 kernel: Movable zone start for each node
Nov 8 00:19:39.126102 kernel: Early memory node ranges
Nov 8 00:19:39.126114 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:19:39.126131 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 8 00:19:39.126142 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 8 00:19:39.126158 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:19:39.126173 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:19:39.126184 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 8 00:19:39.126194 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:19:39.126204 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:19:39.126214 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:19:39.126224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:19:39.126239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:19:39.126257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:19:39.126281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:19:39.126298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:19:39.126314 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:19:39.126331 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:19:39.126348 kernel: TSC deadline timer available
Nov 8 00:19:39.126364 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:19:39.126381 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:19:39.126397 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:19:39.126422 kernel: kvm-guest: setup PV sched yield
Nov 8 00:19:39.126445 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:19:39.126463 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:19:39.126481 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:19:39.126496 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:19:39.126511 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:19:39.126522 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:19:39.126532 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:19:39.126545 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:19:39.126559 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:19:39.126576 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:39.126594 kernel: random: crng init done
Nov 8 00:19:39.126609 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:19:39.126622 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:19:39.126634 kernel: Fallback order for Node 0: 0
Nov 8 00:19:39.126652 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 8 00:19:39.126666 kernel: Policy zone: DMA32
Nov 8 00:19:39.126681 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:19:39.126706 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved)
Nov 8 00:19:39.126723 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:19:39.126739 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:19:39.126755 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:19:39.126772 kernel: Dynamic Preempt: voluntary
Nov 8 00:19:39.126788 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:19:39.126806 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:19:39.126824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:19:39.126838 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:19:39.126883 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:19:39.126902 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:19:39.126919 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:19:39.126935 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:19:39.126958 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:19:39.126982 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:19:39.126998 kernel: Console: colour VGA+ 80x25
Nov 8 00:19:39.127015 kernel: printk: console [ttyS0] enabled
Nov 8 00:19:39.127034 kernel: ACPI: Core revision 20230628
Nov 8 00:19:39.127051 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:19:39.127075 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:19:39.127091 kernel: x2apic enabled
Nov 8 00:19:39.127108 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:19:39.127126 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:19:39.127143 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:19:39.127159 kernel: kvm-guest: setup PV IPIs
Nov 8 00:19:39.127177 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:19:39.127213 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:19:39.127231 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 8 00:19:39.127248 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:19:39.127266 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:19:39.127289 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:19:39.127307 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:19:39.127324 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:19:39.127341 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:19:39.127360 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:19:39.127383 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:19:39.127403 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:19:39.127429 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:19:39.127451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:19:39.127474 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:19:39.127497 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:19:39.127520 kernel: active return thunk: srso_return_thunk
Nov 8 00:19:39.127542 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:19:39.127570 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:19:39.127593 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:19:39.127616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:19:39.127637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:19:39.127654 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:19:39.127673 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:19:39.127691 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:19:39.127709 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:19:39.127726 kernel: landlock: Up and running.
Nov 8 00:19:39.127751 kernel: SELinux: Initializing.
Nov 8 00:19:39.127768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:19:39.127785 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:19:39.127803 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:19:39.127821 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:19:39.127839 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:19:39.127856 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:19:39.127900 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:19:39.127924 kernel: ... version: 0
Nov 8 00:19:39.127949 kernel: ... bit width: 48
Nov 8 00:19:39.127977 kernel: ... generic registers: 6
Nov 8 00:19:39.127992 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:19:39.128010 kernel: ... max period: 00007fffffffffff
Nov 8 00:19:39.128027 kernel: ... fixed-purpose events: 0
Nov 8 00:19:39.128045 kernel: ... event mask: 000000000000003f
Nov 8 00:19:39.128062 kernel: signal: max sigframe size: 1776
Nov 8 00:19:39.128081 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:19:39.128100 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:19:39.128122 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:19:39.128142 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:19:39.128161 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:19:39.128178 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:19:39.128196 kernel: smpboot: Max logical packages: 1
Nov 8 00:19:39.128214 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 8 00:19:39.128228 kernel: devtmpfs: initialized
Nov 8 00:19:39.128243 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:19:39.128256 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:19:39.128266 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:19:39.128281 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:19:39.128290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:19:39.128300 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:19:39.128309 kernel: audit: type=2000 audit(1762561177.584:1): state=initialized audit_enabled=0 res=1
Nov 8 00:19:39.128319 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:19:39.128328 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:19:39.128337 kernel: cpuidle: using governor menu
Nov 8 00:19:39.128347 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:19:39.128356 kernel: dca service started, version 1.12.1
Nov 8 00:19:39.128369 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:19:39.128379 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:19:39.128388 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:19:39.128398 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:19:39.128408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:19:39.128417 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:19:39.128427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:19:39.128441 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:19:39.128453 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:19:39.128463 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:19:39.128472 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:19:39.128482 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:19:39.128491 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:19:39.128500 kernel: ACPI: Interpreter enabled
Nov 8 00:19:39.128516 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:19:39.128535 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:19:39.128551 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:19:39.128569 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:19:39.128593 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:19:39.128612 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:19:39.128997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:19:39.129249 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:19:39.129496 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:19:39.129517 kernel: PCI host bridge to bus 0000:00
Nov 8 00:19:39.129776 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:19:39.130045 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:19:39.130254 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:19:39.130474 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:19:39.130692 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:19:39.130942 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 8 00:19:39.131174 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:19:39.131471 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:19:39.131749 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:19:39.132032 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:19:39.132268 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:19:39.132582 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:19:39.132822 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:19:39.133136 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:19:39.133393 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 8 00:19:39.133683 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:19:39.133945 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:19:39.134216 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:19:39.134460 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:19:39.134706 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:19:39.135072 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:19:39.135421 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:19:39.135769 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 8 00:19:39.136014 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:19:39.136185 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 8 00:19:39.136361 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:19:39.136581 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:19:39.136833 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:19:39.137164 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:19:39.137473 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 8 00:19:39.137796 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 8 00:19:39.138094 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:19:39.138325 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:19:39.138348 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:19:39.138375 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:19:39.138393 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:19:39.138411 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:19:39.138430 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:19:39.138452 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:19:39.138474 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:19:39.138496 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:19:39.138519 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:19:39.138538 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:19:39.138575 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:19:39.138632 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:19:39.138669 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:19:39.138689 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:19:39.138706 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:19:39.138724 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:19:39.138750 kernel: iommu: Default domain type: Translated
Nov 8 00:19:39.138769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:19:39.138787 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:19:39.138812 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:19:39.138829 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:19:39.138846 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 8 00:19:39.139118 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:19:39.139360 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:19:39.139599 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:19:39.139622 kernel: vgaarb: loaded
Nov 8 00:19:39.139641 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:19:39.139659 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:19:39.139684 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:19:39.139701 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:19:39.139719 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:19:39.139737 kernel: pnp: PnP ACPI init
Nov 8 00:19:39.140043 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:19:39.140061 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:19:39.140072 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:19:39.140082 kernel: NET: Registered PF_INET protocol family
Nov 8 00:19:39.140097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:19:39.140107 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:19:39.140117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:19:39.140127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:19:39.140136 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:19:39.140146 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:19:39.140155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:19:39.140165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:19:39.140174 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:19:39.140187 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:19:39.140372 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:19:39.140595 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:19:39.140820 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:19:39.141076 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:19:39.141300 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:19:39.141522 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 8 00:19:39.141545 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:19:39.141570 kernel: Initialise system trusted keyrings
Nov 8 00:19:39.141588 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:19:39.141603 kernel: Key type asymmetric registered
Nov 8 00:19:39.141622 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:19:39.141642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:19:39.141660 kernel: io scheduler mq-deadline registered
Nov 8 00:19:39.141677 kernel: io scheduler kyber registered
Nov 8 00:19:39.141695 kernel: io scheduler bfq registered
Nov 8 00:19:39.141712 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:19:39.141737 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:19:39.141756 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:19:39.141774 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:19:39.141792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:19:39.141810 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:19:39.141829 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:19:39.141847 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:19:39.141913 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:19:39.142164 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:19:39.142194 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:19:39.142432 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:19:39.142668 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:19:38 UTC (1762561178)
Nov 8 00:19:39.142917 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:19:39.142939 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:19:39.142957 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:19:39.142986 kernel: Segment Routing with IPv6
Nov 8 00:19:39.143003 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:19:39.143029 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:19:39.143047 kernel: Key type dns_resolver registered
Nov 8 00:19:39.143065 kernel: IPI shorthand broadcast: enabled
Nov 8 00:19:39.143083 kernel: sched_clock: Marking stable (946002948, 196674177)->(1280389217, -137712092)
Nov 8 00:19:39.143101 kernel: registered taskstats version 1
Nov 8 00:19:39.143119 kernel: Loading compiled-in X.509 certificates
Nov 8 00:19:39.143136 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:19:39.143155 kernel: Key type .fscrypt registered
Nov 8 00:19:39.143173 kernel: Key type fscrypt-provisioning registered
Nov 8 00:19:39.143198 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:19:39.143215 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:19:39.143232 kernel: ima: No architecture policies found
Nov 8 00:19:39.143249 kernel: clk: Disabling unused clocks
Nov 8 00:19:39.143267 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:19:39.143285 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:19:39.143303 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:19:39.143321 kernel: Run /init as init process
Nov 8 00:19:39.143338 kernel: with arguments:
Nov 8 00:19:39.143362 kernel: /init
Nov 8 00:19:39.143381 kernel: with environment:
Nov 8 00:19:39.143398 kernel: HOME=/
Nov 8 00:19:39.143415 kernel: TERM=linux
Nov 8 00:19:39.143438 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:19:39.143465 systemd[1]: Detected virtualization kvm.
Nov 8 00:19:39.143490 systemd[1]: Detected architecture x86-64.
Nov 8 00:19:39.143517 systemd[1]: Running in initrd.
Nov 8 00:19:39.143550 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:19:39.143574 systemd[1]: Hostname set to <localhost>.
Nov 8 00:19:39.143599 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:19:39.143624 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:19:39.143648 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:19:39.143671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:19:39.143693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:19:39.143713 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:19:39.143740 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:19:39.143787 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:19:39.143818 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:19:39.143839 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:19:39.143881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:19:39.143900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:19:39.143947 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:19:39.143978 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:19:39.143990 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:19:39.144000 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:19:39.144011 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:19:39.144023 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:19:39.144034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:19:39.144052 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:19:39.144063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:19:39.144074 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:19:39.144084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:19:39.144095 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:19:39.144105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:19:39.144116 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:19:39.144127 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:19:39.144141 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:19:39.144155 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:19:39.144166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:19:39.144176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:39.144187 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:19:39.144198 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:19:39.144209 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:19:39.144261 systemd-journald[193]: Collecting audit messages is disabled.
Nov 8 00:19:39.144299 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:19:39.144324 systemd-journald[193]: Journal started
Nov 8 00:19:39.144355 systemd-journald[193]: Runtime Journal (/run/log/journal/f04d2773bd774d8fa1ef4bdc75f6deae) is 6.0M, max 48.4M, 42.3M free.
Nov 8 00:19:39.135756 systemd-modules-load[194]: Inserted module 'overlay'
Nov 8 00:19:39.202495 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:19:39.202537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:19:39.202564 kernel: Bridge firewalling registered
Nov 8 00:19:39.165486 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 8 00:19:39.220304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:19:39.224582 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:39.229021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:19:39.248086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:39.252416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:19:39.254113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:19:39.260027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:19:39.271776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:19:39.276435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:19:39.281490 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:39.282647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:19:39.299016 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:19:39.301228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:19:39.319914 dracut-cmdline[229]: dracut-dracut-053
Nov 8 00:19:39.323189 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:19:39.341900 systemd-resolved[232]: Positive Trust Anchors:
Nov 8 00:19:39.341977 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:19:39.342008 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:19:39.344699 systemd-resolved[232]: Defaulting to hostname 'linux'.
Nov 8 00:19:39.345936 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:19:39.347487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:19:39.452918 kernel: SCSI subsystem initialized
Nov 8 00:19:39.462883 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:19:39.473891 kernel: iscsi: registered transport (tcp)
Nov 8 00:19:39.496478 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:19:39.496560 kernel: QLogic iSCSI HBA Driver
Nov 8 00:19:39.554560 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:19:39.567033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:19:39.593950 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:19:39.594018 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:19:39.595598 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:19:39.638903 kernel: raid6: avx2x4 gen() 29866 MB/s
Nov 8 00:19:39.655889 kernel: raid6: avx2x2 gen() 30062 MB/s
Nov 8 00:19:39.673649 kernel: raid6: avx2x1 gen() 25549 MB/s
Nov 8 00:19:39.673687 kernel: raid6: using algorithm avx2x2 gen() 30062 MB/s
Nov 8 00:19:39.691677 kernel: raid6: .... xor() 19606 MB/s, rmw enabled
Nov 8 00:19:39.691709 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:19:39.712898 kernel: xor: automatically using best checksumming function avx
Nov 8 00:19:39.884909 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:19:39.900182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:19:39.913103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:19:39.925530 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Nov 8 00:19:39.930985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:19:39.936942 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:19:39.957950 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Nov 8 00:19:39.994813 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:19:40.015196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:19:40.088051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:19:40.098065 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:19:40.115363 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:19:40.117122 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:19:40.119427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:19:40.120253 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:19:40.136890 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 8 00:19:40.137048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:19:40.141882 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:19:40.144880 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 8 00:19:40.154414 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:19:40.154480 kernel: GPT:9289727 != 19775487
Nov 8 00:19:40.154495 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:19:40.154506 kernel: GPT:9289727 != 19775487
Nov 8 00:19:40.154516 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:19:40.154527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:40.153746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:19:40.184894 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:19:40.184954 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:19:40.185905 kernel: libata version 3.00 loaded.
Nov 8 00:19:40.193889 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (481)
Nov 8 00:19:40.196564 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:19:40.196791 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (463)
Nov 8 00:19:40.196805 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:19:40.198289 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:19:40.207109 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:19:40.207294 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:19:40.207442 kernel: scsi host0: ahci
Nov 8 00:19:40.209893 kernel: scsi host1: ahci
Nov 8 00:19:40.210098 kernel: scsi host2: ahci
Nov 8 00:19:40.211900 kernel: scsi host3: ahci
Nov 8 00:19:40.214531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:19:40.233062 kernel: scsi host4: ahci
Nov 8 00:19:40.233295 kernel: scsi host5: ahci
Nov 8 00:19:40.233451 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 8 00:19:40.233468 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 8 00:19:40.233479 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 8 00:19:40.233493 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 8 00:19:40.233506 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 8 00:19:40.233517 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 8 00:19:40.237996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:19:40.245888 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:19:40.250064 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:19:40.267072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:19:40.268992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:19:40.271069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:40.273855 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:40.276821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:19:40.276934 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:40.280847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:40.286150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:19:40.379716 disk-uuid[566]: Primary Header is updated.
Nov 8 00:19:40.379716 disk-uuid[566]: Secondary Entries is updated.
Nov 8 00:19:40.379716 disk-uuid[566]: Secondary Header is updated.
Nov 8 00:19:40.385589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:40.389889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:40.491626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:19:40.507051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:19:40.527315 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:19:40.537897 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:19:40.537930 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:19:40.540234 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:19:40.540309 kernel: ata3.00: applying bridge limits
Nov 8 00:19:40.541882 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:19:40.542949 kernel: ata3.00: configured for UDMA/100
Nov 8 00:19:40.545896 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:19:40.545928 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:19:40.547909 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:19:40.549000 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:19:40.602686 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:19:40.602994 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:19:40.618949 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:19:41.454892 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:19:41.455114 disk-uuid[570]: The operation has completed successfully.
Nov 8 00:19:41.484745 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:19:41.484883 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:19:41.510990 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:19:41.515175 sh[596]: Success
Nov 8 00:19:41.529928 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:19:41.564364 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:19:41.584497 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:19:41.587562 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:19:41.736903 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:19:41.736937 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:41.739744 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:19:41.739765 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:19:41.741064 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:19:41.746924 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:19:41.747791 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:19:41.764115 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:19:41.768332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:19:41.780607 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:41.780641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:41.780656 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:41.785903 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:41.796904 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:19:41.800067 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:41.875978 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:19:41.887089 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:19:41.908020 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:19:41.920050 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:19:41.992198 systemd-networkd[777]: lo: Link UP
Nov 8 00:19:41.992946 systemd-networkd[777]: lo: Gained carrier
Nov 8 00:19:41.996643 systemd-networkd[777]: Enumeration completed
Nov 8 00:19:41.997240 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:19:41.997244 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:19:41.997505 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:19:41.998280 systemd-networkd[777]: eth0: Link UP
Nov 8 00:19:41.998284 systemd-networkd[777]: eth0: Gained carrier
Nov 8 00:19:41.998291 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:19:42.018834 systemd[1]: Reached target network.target - Network.
Nov 8 00:19:42.048951 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:19:42.049317 ignition[760]: Ignition 2.19.0
Nov 8 00:19:42.049325 ignition[760]: Stage: fetch-offline
Nov 8 00:19:42.049390 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:42.049401 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:19:42.049526 ignition[760]: parsed url from cmdline: ""
Nov 8 00:19:42.049530 ignition[760]: no config URL provided
Nov 8 00:19:42.049535 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:19:42.049545 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:19:42.049579 ignition[760]: op(1): [started] loading QEMU firmware config module
Nov 8 00:19:42.049585 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:19:42.060435 ignition[760]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:19:42.060465 ignition[760]: QEMU firmware config was not found. Ignoring...
Nov 8 00:19:42.176895 ignition[760]: parsing config with SHA512: 0745da0fc3b2b094902ba154a4008dd72c75f73f5d73399681a216b5dc0d09d0423c00ef42ff8929b99afdaad14e84d2c49a9ede127427e01bba32bf7f33b978
Nov 8 00:19:42.188335 unknown[760]: fetched base config from "system"
Nov 8 00:19:42.188423 unknown[760]: fetched user config from "qemu"
Nov 8 00:19:42.189415 ignition[760]: fetch-offline: fetch-offline passed
Nov 8 00:19:42.189668 ignition[760]: Ignition finished successfully
Nov 8 00:19:42.197600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:19:42.198646 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:19:42.210323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:19:42.243748 ignition[789]: Ignition 2.19.0
Nov 8 00:19:42.243780 ignition[789]: Stage: kargs
Nov 8 00:19:42.244247 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:42.244269 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:19:42.250807 ignition[789]: kargs: kargs passed
Nov 8 00:19:42.250885 ignition[789]: Ignition finished successfully
Nov 8 00:19:42.256552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:19:42.271297 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:19:42.289610 ignition[798]: Ignition 2.19.0
Nov 8 00:19:42.289626 ignition[798]: Stage: disks
Nov 8 00:19:42.289836 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:42.289851 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:19:42.290833 ignition[798]: disks: disks passed
Nov 8 00:19:42.290914 ignition[798]: Ignition finished successfully
Nov 8 00:19:42.299518 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:19:42.300596 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:19:42.303351 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:19:42.303883 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:19:42.305401 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:19:42.313785 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:19:42.331101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:19:42.350365 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:19:42.358437 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:19:42.366128 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:19:42.461903 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:19:42.462897 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:19:42.466547 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:19:42.477003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:19:42.479859 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:19:42.487367 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Nov 8 00:19:42.487411 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:42.487425 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:42.487438 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:42.482805 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:19:42.498637 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:42.482881 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:19:42.482916 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:19:42.496373 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:19:42.499700 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:19:42.511027 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:19:42.552616 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:19:42.557996 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:19:42.562563 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:19:42.567052 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:19:42.666024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:19:42.676020 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:19:42.678728 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:19:42.686700 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:19:42.690810 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:42.712279 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:19:42.718143 ignition[932]: INFO : Ignition 2.19.0
Nov 8 00:19:42.718143 ignition[932]: INFO : Stage: mount
Nov 8 00:19:42.720698 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:42.720698 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:19:42.720698 ignition[932]: INFO : mount: mount passed
Nov 8 00:19:42.720698 ignition[932]: INFO : Ignition finished successfully
Nov 8 00:19:42.728718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:19:42.735035 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:19:42.744679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:19:42.757888 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Nov 8 00:19:42.761354 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:19:42.761391 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:19:42.761402 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:19:42.765894 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:19:42.767633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:19:42.795093 ignition[962]: INFO : Ignition 2.19.0
Nov 8 00:19:42.795093 ignition[962]: INFO : Stage: files
Nov 8 00:19:42.797739 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:19:42.797739 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:19:42.797739 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:19:42.797739 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:19:42.797739 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:19:42.808658 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:19:42.808658 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:19:42.808658 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:19:42.808658 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:19:42.808658 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:19:42.800926 unknown[962]: wrote ssh authorized keys file for user: core
Nov 8 00:19:42.846768 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:19:42.895882 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:19:42.907293 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:19:43.340854 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:19:43.556103 systemd-networkd[777]: eth0: Gained IPv6LL
Nov 8 00:19:44.005069 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:19:44.005069 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:19:44.011101 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:19:44.014643 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:19:44.014643 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:19:44.014643 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 8 00:19:44.021685 ignition[962]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:19:44.024811 ignition[962]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:19:44.024811 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 8 00:19:44.024811 ignition[962]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:19:44.054451 ignition[962]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:19:44.065435 ignition[962]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:19:44.068342 ignition[962]: INFO : files: files passed
Nov 8 00:19:44.068342 ignition[962]: INFO : Ignition finished successfully
Nov 8 00:19:44.089659 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:19:44.103013 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:19:44.106638 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:19:44.108217 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:19:44.108342 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:19:44.136303 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:19:44.141422 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:44.141422 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:44.146697 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:19:44.151317 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:19:44.152556 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:19:44.168022 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:19:44.193720 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:19:44.193880 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:19:44.195317 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:19:44.199921 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:19:44.203376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:19:44.204189 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:19:44.225263 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:19:44.243035 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:19:44.255591 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:19:44.256362 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:19:44.260567 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:19:44.264332 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:19:44.264440 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:19:44.270185 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:19:44.271352 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:19:44.275643 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:19:44.278420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:19:44.281771 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:19:44.285542 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:19:44.289274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:19:44.292418 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:19:44.296450 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:19:44.299632 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:19:44.302742 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:19:44.302851 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:19:44.307877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:19:44.308733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
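
[Annotation] The files-stage operations logged above (user "core" with SSH keys, the fetched helm tarball, the static YAML files, the kubernetes.raw symlink, and the unit presets) are the kind of output an Ignition v3 config produces. A schematic reconstruction in Python, building illustrative JSON; field names follow the Ignition v3 layout from memory and the SSH key is a placeholder, so treat this as a sketch rather than a config validated against the exact schema Ignition 2.19.0 uses:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/home/core/install.sh", "mode": 0o755},
                {"path": "/home/core/nginx.yaml"},
                {"path": "/etc/flatcar/update.conf"},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True},
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }
    print(json.dumps(config, indent=2))
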
Nov 8 00:19:44.313234 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:19:44.313351 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:19:44.316930 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:19:44.317070 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:19:44.323580 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:19:44.323716 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:19:44.326971 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:19:44.327742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:19:44.330921 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:19:44.332354 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:19:44.335692 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:19:44.338483 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:19:44.338585 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:19:44.341562 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:19:44.341664 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:19:44.345417 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:19:44.345544 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:19:44.348556 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:19:44.348675 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:19:44.368044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:19:44.368695 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:19:44.368836 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:19:44.376635 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:19:44.379613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:19:44.381401 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:19:44.385546 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:19:44.387414 ignition[1016]: INFO : Ignition 2.19.0 Nov 8 00:19:44.387414 ignition[1016]: INFO : Stage: umount Nov 8 00:19:44.387414 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:19:44.387414 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:19:44.387414 ignition[1016]: INFO : umount: umount passed Nov 8 00:19:44.387414 ignition[1016]: INFO : Ignition finished successfully Nov 8 00:19:44.387606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:19:44.402498 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:19:44.404684 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:19:44.406399 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:19:44.411661 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:19:44.413206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:19:44.418032 systemd[1]: Stopped target network.target - Network. Nov 8 00:19:44.420918 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 8 00:19:44.422415 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:19:44.425680 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:19:44.425742 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:19:44.430321 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:19:44.430383 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:19:44.435060 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:19:44.435121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:19:44.440271 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:19:44.443746 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:19:44.447916 systemd-networkd[777]: eth0: DHCPv6 lease lost Nov 8 00:19:44.449972 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:19:44.451588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:19:44.455365 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:19:44.457082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:19:44.462209 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:19:44.462281 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:19:44.475987 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:19:44.477468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:19:44.477540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:19:44.481048 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:19:44.481101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:19:44.484467 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:19:44.484520 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:19:44.486391 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:19:44.486443 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:19:44.490665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:19:44.511878 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:19:44.513629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:19:44.518445 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:19:44.520207 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:19:44.524623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:19:44.529497 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:19:44.533475 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:19:44.533534 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:19:44.538705 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:19:44.540272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:19:44.544068 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:19:44.544132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:19:44.549670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 8 00:19:44.551295 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:19:44.570015 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:19:44.573591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:19:44.575294 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:19:44.579335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:19:44.579391 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:19:44.584844 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:19:44.586385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:19:44.589716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:19:44.591516 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:19:44.596396 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:19:44.599716 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:19:44.599782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:19:44.615072 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:19:44.627058 systemd[1]: Switching root. Nov 8 00:19:44.666099 systemd-journald[193]: Journal stopped Nov 8 00:19:46.212019 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 8 00:19:46.212145 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:19:46.212189 kernel: SELinux: policy capability open_perms=1 Nov 8 00:19:46.212226 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:19:46.212255 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:19:46.212277 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:19:46.212298 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:19:46.212309 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:19:46.212321 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:19:46.212337 kernel: audit: type=1403 audit(1762561185.190:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:19:46.212351 systemd[1]: Successfully loaded SELinux policy in 42.560ms. Nov 8 00:19:46.212372 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.212ms. Nov 8 00:19:46.212386 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:19:46.212399 systemd[1]: Detected virtualization kvm. Nov 8 00:19:46.212411 systemd[1]: Detected architecture x86-64. Nov 8 00:19:46.212422 systemd[1]: Detected first boot. Nov 8 00:19:46.212435 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:19:46.212453 zram_generator::config[1061]: No configuration found. Nov 8 00:19:46.212469 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:19:46.212482 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:19:46.212494 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:19:46.212506 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
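
[Annotation] "Initializing machine ID from VM UUID" above: on a KVM first boot, systemd can seed /etc/machine-id from the DMI product UUID exposed by the hypervisor. A minimal sketch of that derivation under the assumption that the UUID is taken as-is (lowercased, dashes dropped) to reach the 32-hex-digit machine-id form; the sysfs path is the standard one, the example UUID is made up:

    def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.lower().replace("-", "")

    # Example with a made-up UUID instead of reading sysfs:
    print("2B4CFEF4-7F9F-4E3A-92A5-0E1C6F1A2B3C".lower().replace("-", ""))
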
Nov 8 00:19:46.212519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:19:46.212532 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:19:46.212543 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:19:46.212556 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:19:46.212571 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:19:46.212584 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:19:46.212596 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:19:46.212608 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:19:46.212620 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:19:46.212633 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:19:46.212645 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:19:46.212658 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:19:46.212673 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:19:46.212686 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:19:46.212698 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:19:46.212710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:19:46.212726 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:19:46.212738 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:19:46.212750 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:19:46.212762 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:19:46.212777 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:19:46.212790 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:19:46.212804 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:19:46.212816 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:19:46.212830 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:19:46.212843 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:19:46.212857 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:19:46.213227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:19:46.213241 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:19:46.213253 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:19:46.213269 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:19:46.213282 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:19:46.213294 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:19:46.213306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
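
[Annotation] The `\x2d` sequences in the unit names above (system-serial\x2dgetty.slice, dev-disk-by\x2dlabel-OEM.device) are systemd's unit-name escaping: "/" maps to "-", while "-" and any byte outside [a-zA-Z0-9:_.] is rendered as \xXX. A minimal sketch of that encoding (ignoring the special-casing of a leading dot):

    def systemd_escape(path: str) -> str:
        out = []
        for ch in path.strip("/") or "/":
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape("/dev/disk/by-label/OEM"))  # dev-disk-by\x2dlabel-OEM
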
Nov 8 00:19:46.213318 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:19:46.213330 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:19:46.213342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:19:46.213356 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:19:46.213371 systemd[1]: Reached target machines.target - Containers. Nov 8 00:19:46.213383 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:19:46.213395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:19:46.213407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:19:46.213419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:19:46.213431 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:19:46.213446 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:19:46.213472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:19:46.213496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:19:46.213508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:19:46.213527 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:19:46.213539 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:19:46.213551 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:19:46.213563 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:19:46.213575 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:19:46.213587 kernel: fuse: init (API version 7.39) Nov 8 00:19:46.213599 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:19:46.213614 kernel: loop: module loaded Nov 8 00:19:46.213631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:19:46.213643 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:19:46.213656 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:19:46.213678 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:19:46.213743 systemd-journald[1135]: Collecting audit messages is disabled. Nov 8 00:19:46.213783 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:19:46.213796 systemd-journald[1135]: Journal started Nov 8 00:19:46.213841 systemd-journald[1135]: Runtime Journal (/run/log/journal/f04d2773bd774d8fa1ef4bdc75f6deae) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:19:45.823109 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:19:45.841977 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:19:45.842472 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:19:46.215923 systemd[1]: Stopped verity-setup.service. 
Nov 8 00:19:46.215951 kernel: ACPI: bus type drm_connector registered Nov 8 00:19:46.221927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:46.230028 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:19:46.231031 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:19:46.232916 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:19:46.234874 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:19:46.236665 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:19:46.238651 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:19:46.240609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:19:46.242571 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:19:46.244820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:19:46.247436 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:19:46.247627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:19:46.249938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:19:46.250127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:19:46.252414 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:19:46.252598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:19:46.254656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:19:46.254831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:19:46.257213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:19:46.257391 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:19:46.259648 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:19:46.259826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:19:46.261959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:19:46.264283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:19:46.266635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:19:46.284199 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:19:46.294957 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:19:46.298237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:19:46.300203 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:19:46.300307 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:19:46.303089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:19:46.306334 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:19:46.309548 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:19:46.311435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 8 00:19:46.313992 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:19:46.323075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:19:46.325175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:19:46.327226 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:19:46.328995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:19:46.334277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:19:46.337992 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:19:46.369814 systemd-journald[1135]: Time spent on flushing to /var/log/journal/f04d2773bd774d8fa1ef4bdc75f6deae is 16.843ms for 948 entries. Nov 8 00:19:46.369814 systemd-journald[1135]: System Journal (/var/log/journal/f04d2773bd774d8fa1ef4bdc75f6deae) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:19:46.430549 systemd-journald[1135]: Received client request to flush runtime journal. Nov 8 00:19:46.429395 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:19:46.434358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:19:46.437184 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:19:46.439770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:19:46.442638 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:19:46.445638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:19:46.449898 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:19:46.448418 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:19:46.460006 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:19:46.471111 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:19:46.475082 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:19:46.478878 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:19:46.480005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:19:46.488359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:19:46.504203 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:19:46.508828 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:19:46.516961 kernel: loop1: detected capacity change from 0 to 142488 Nov 8 00:19:46.515553 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:19:46.518989 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:19:46.545603 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Nov 8 00:19:46.545625 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Nov 8 00:19:46.556002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:19:46.609921 kernel: loop2: detected capacity change from 0 to 219144 Nov 8 00:19:46.642903 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:19:46.658901 kernel: loop4: detected capacity change from 0 to 142488 Nov 8 00:19:46.671885 kernel: loop5: detected capacity change from 0 to 219144 Nov 8 00:19:46.680550 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:19:46.682213 (sd-merge)[1200]: Merged extensions into '/usr'. Nov 8 00:19:46.688474 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:19:46.688497 systemd[1]: Reloading... Nov 8 00:19:46.796907 zram_generator::config[1227]: No configuration found. Nov 8 00:19:46.875166 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:19:46.957087 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:19:47.008748 systemd[1]: Reloading finished in 319 ms. Nov 8 00:19:47.045968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:19:47.048343 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:19:47.065191 systemd[1]: Starting ensure-sysext.service... Nov 8 00:19:47.068701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:19:47.075499 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:19:47.075518 systemd[1]: Reloading... Nov 8 00:19:47.107368 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:19:47.107785 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:19:47.108902 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:19:47.109231 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 8 00:19:47.109328 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Nov 8 00:19:47.112845 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:19:47.112858 systemd-tmpfiles[1264]: Skipping /boot Nov 8 00:19:47.147183 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:19:47.147200 systemd-tmpfiles[1264]: Skipping /boot Nov 8 00:19:47.172926 zram_generator::config[1294]: No configuration found. Nov 8 00:19:47.285293 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:19:47.336618 systemd[1]: Reloading finished in 260 ms. Nov 8 00:19:47.358096 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:19:47.371441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:19:47.391248 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:19:47.395190 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
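
[Annotation] The sd-merge lines above show systemd-sysext finding three extension images (containerd-flatcar, docker-flatcar, kubernetes; the loop3-5 capacities mirror loop0-2) and merging them into /usr. Conceptually this is an overlay in which /usr is the lowest layer and each extension shadows it; a toy model of that precedence using dicts as directory trees (the real mechanism is a read-only overlayfs mount, and the file paths below are illustrative):

    base = {"/usr/bin/bash": "base"}
    extensions = {
        "containerd-flatcar": {"/usr/bin/containerd": "containerd-flatcar"},
        "docker-flatcar":     {"/usr/bin/docker": "docker-flatcar"},
        "kubernetes":         {"/usr/bin/kubelet": "kubernetes"},
    }

    merged = dict(base)
    for name, tree in extensions.items():  # later layers take precedence
        merged.update(tree)

    for path, origin in sorted(merged.items()):
        print(f"{path:24} <- {origin}")
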
Nov 8 00:19:47.398717 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:19:47.404165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:19:47.411928 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:19:47.418964 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:19:47.423446 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:47.424006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:19:47.425396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:19:47.431147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:19:47.438993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:19:47.440687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:19:47.445933 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:19:47.447543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:47.448682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:19:47.448923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:19:47.451409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:19:47.451593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:19:47.454255 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:19:47.454433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:19:47.460464 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Nov 8 00:19:47.462492 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:19:47.465168 augenrules[1356]: No rules Nov 8 00:19:47.465408 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:19:47.472270 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:19:47.478051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:47.478346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:19:47.486228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:19:47.489741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:19:47.496164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:19:47.498294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:19:47.508652 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:19:47.510498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 8 00:19:47.511327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:19:47.514277 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:19:47.517559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:19:47.517779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:19:47.520329 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:19:47.522709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:19:47.522937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:19:47.525646 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:19:47.525830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:19:47.528503 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:19:47.555177 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:47.555389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:19:47.561137 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:19:47.565755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:19:47.638983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:19:47.646021 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:19:47.647898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:19:47.653577 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:19:47.657613 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:19:47.657635 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:19:47.658312 systemd[1]: Finished ensure-sysext.service. Nov 8 00:19:47.660493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:19:47.660676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:19:47.665599 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:19:47.665781 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:19:47.667934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:19:47.668111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:19:47.672480 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:19:47.672661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:19:47.678663 systemd-resolved[1334]: Positive Trust Anchors: Nov 8 00:19:47.678999 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:19:47.679034 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:19:47.683307 systemd-resolved[1334]: Defaulting to hostname 'linux'. Nov 8 00:19:47.687258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:19:47.697505 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:19:47.705923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1398) Nov 8 00:19:47.708596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:19:47.710722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:19:47.710794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:19:47.720103 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:19:47.728606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:19:47.731368 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:19:47.732086 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:19:47.735975 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:19:47.736210 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:19:47.740093 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:19:47.744957 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:19:47.751268 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:19:47.759078 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:19:47.777764 systemd-networkd[1407]: lo: Link UP Nov 8 00:19:47.778647 systemd-networkd[1407]: lo: Gained carrier Nov 8 00:19:47.781501 systemd-networkd[1407]: Enumeration completed Nov 8 00:19:47.782410 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:19:47.783236 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:19:47.783310 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:19:47.784144 systemd-networkd[1407]: eth0: Link UP Nov 8 00:19:47.784505 systemd[1]: Reached target network.target - Network. Nov 8 00:19:47.785429 systemd-networkd[1407]: eth0: Gained carrier Nov 8 00:19:47.785494 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
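
[Annotation] The positive trust anchor logged above is the root zone's DS record (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256), the anchor systemd-resolved uses for DNSSEC validation. A small parser for that presentation format:

    from collections import namedtuple

    DS = namedtuple("DS", "owner key_tag algorithm digest_type digest")

    def parse_ds(line: str) -> DS:
        owner, _klass, _rtype, tag, alg, dtype, digest = line.split()
        return DS(owner, int(tag), int(alg), int(dtype), digest.lower())

    anchor = parse_ds(". IN DS 20326 8 2 "
                      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(anchor.key_tag, anchor.algorithm, anchor.digest_type)  # 20326 8 2
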
Nov 8 00:19:47.795107 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:19:47.839654 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:19:47.847440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:19:47.860382 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:19:47.865084 systemd-timesyncd[1418]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 8 00:19:47.865122 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:19:47.865145 systemd-timesyncd[1418]: Initial clock synchronization to Sat 2025-11-08 00:19:47.929829 UTC. Nov 8 00:19:47.929902 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:19:47.944531 kernel: kvm_amd: TSC scaling supported Nov 8 00:19:47.944597 kernel: kvm_amd: Nested Virtualization enabled Nov 8 00:19:47.944616 kernel: kvm_amd: Nested Paging enabled Nov 8 00:19:47.945378 kernel: kvm_amd: LBR virtualization supported Nov 8 00:19:47.946329 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 8 00:19:47.947367 kernel: kvm_amd: Virtual GIF supported Nov 8 00:19:47.973903 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:19:48.019649 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:19:48.064300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:19:48.085136 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:19:48.097271 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:19:48.130808 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:19:48.133393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:19:48.135275 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:19:48.137198 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:19:48.139300 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:19:48.141628 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:19:48.143507 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:19:48.145601 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:19:48.147732 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:19:48.147761 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:19:48.149281 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:19:48.151818 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:19:48.155490 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:19:48.166145 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:19:48.169353 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:19:48.171810 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:19:48.173861 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:19:48.175621 systemd[1]: Reached target basic.target - Basic System. 
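
[Annotation] The timesyncd entry above carries its journal stamp from just before the first synchronization (00:19:47.865145) and the time it synchronized to (00:19:47.929829), i.e. the clock was stepped forward by roughly 65 ms; the next kernel line (00:19:47.929902) already uses the corrected clock. Reproducing the arithmetic:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    before = datetime.strptime("00:19:47.865145", fmt)
    after  = datetime.strptime("00:19:47.929829", fmt)
    print((after - before).total_seconds() * 1000, "ms")  # ~64.68 ms
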
Nov 8 00:19:48.177220 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:19:48.177255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:19:48.178441 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:19:48.181470 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:19:48.183975 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:19:48.186988 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:19:48.190661 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:19:48.192545 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:19:48.194254 jq[1443]: false Nov 8 00:19:48.194552 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:19:48.200041 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:19:48.203083 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:19:48.206618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:19:48.213184 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:19:48.215736 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:19:48.216373 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:19:48.217328 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:19:48.220507 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:19:48.223996 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:19:48.230008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:19:48.232244 jq[1456]: true Nov 8 00:19:48.230422 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:19:48.230815 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:19:48.231386 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:19:48.235700 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:19:48.237198 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 8 00:19:48.248290 extend-filesystems[1444]: Found loop3 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found loop4 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found loop5 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found sr0 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda1 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda2 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda3 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found usr Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda4 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda6 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda7 Nov 8 00:19:48.250093 extend-filesystems[1444]: Found vda9 Nov 8 00:19:48.250093 extend-filesystems[1444]: Checking size of /dev/vda9 Nov 8 00:19:48.283471 extend-filesystems[1444]: Resized partition /dev/vda9 Nov 8 00:19:48.293274 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 8 00:19:48.293312 update_engine[1455]: I20251108 00:19:48.260146 1455 main.cc:92] Flatcar Update Engine starting Nov 8 00:19:48.293312 update_engine[1455]: I20251108 00:19:48.274431 1455 update_check_scheduler.cc:74] Next update check in 4m27s Nov 8 00:19:48.268192 dbus-daemon[1442]: [system] SELinux support is enabled Nov 8 00:19:48.270416 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:19:48.321206 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:19:48.329116 tar[1459]: linux-amd64/LICENSE Nov 8 00:19:48.329116 tar[1459]: linux-amd64/helm Nov 8 00:19:48.329769 jq[1460]: true Nov 8 00:19:48.270942 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:19:48.284578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:19:48.284606 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:19:48.287505 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:19:48.287524 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:19:48.289740 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:19:48.297064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:19:48.320498 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:19:48.320536 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:19:48.322903 systemd-logind[1454]: New seat seat0. Nov 8 00:19:48.324812 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:19:48.346779 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:19:48.351055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1372) Nov 8 00:19:48.373173 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:19:48.373173 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:19:48.373173 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
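
[Annotation] extend-filesystems grew /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB each, per the resize2fs output above. The same arithmetic in Python, to put the resize in byte terms:

    BLOCK = 4096
    for label, blocks in [("before", 553_472), ("after", 1_864_699)]:
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")
    # before: ~2.11 GiB, after: ~7.11 GiB
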
Nov 8 00:19:48.402080 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Nov 8 00:19:48.375744 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:19:48.377033 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:19:48.427665 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:19:48.426299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:19:48.429175 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:19:48.432112 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:19:48.460116 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:19:48.495188 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:19:48.505437 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:19:48.512476 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:19:48.512833 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:19:48.536209 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:19:48.619723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:19:48.636223 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:19:48.640080 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:19:48.642080 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:19:48.747430 containerd[1462]: time="2025-11-08T00:19:48.747322344Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:19:48.801483 containerd[1462]: time="2025-11-08T00:19:48.801364602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804174 containerd[1462]: time="2025-11-08T00:19:48.804114037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804245 containerd[1462]: time="2025-11-08T00:19:48.804178091Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:19:48.804245 containerd[1462]: time="2025-11-08T00:19:48.804199918Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:19:48.804462 containerd[1462]: time="2025-11-08T00:19:48.804431460Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:19:48.804501 containerd[1462]: time="2025-11-08T00:19:48.804469023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804637 containerd[1462]: time="2025-11-08T00:19:48.804615403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804637 containerd[1462]: time="2025-11-08T00:19:48.804634189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804909 containerd[1462]: time="2025-11-08T00:19:48.804862067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804944 containerd[1462]: time="2025-11-08T00:19:48.804906840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804944 containerd[1462]: time="2025-11-08T00:19:48.804932221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:19:48.804996 containerd[1462]: time="2025-11-08T00:19:48.804944270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.805077 containerd[1462]: time="2025-11-08T00:19:48.805058078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.805371 containerd[1462]: time="2025-11-08T00:19:48.805341192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:19:48.805517 containerd[1462]: time="2025-11-08T00:19:48.805494400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:19:48.805517 containerd[1462]: time="2025-11-08T00:19:48.805512772Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:19:48.805640 containerd[1462]: time="2025-11-08T00:19:48.805620842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:19:48.805711 containerd[1462]: time="2025-11-08T00:19:48.805694734Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:19:48.931749 containerd[1462]: time="2025-11-08T00:19:48.931682337Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:19:48.931749 containerd[1462]: time="2025-11-08T00:19:48.931762027Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:19:48.931947 containerd[1462]: time="2025-11-08T00:19:48.931779590Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:19:48.931947 containerd[1462]: time="2025-11-08T00:19:48.931848867Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:19:48.931947 containerd[1462]: time="2025-11-08T00:19:48.931881096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:19:48.932112 containerd[1462]: time="2025-11-08T00:19:48.932091066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:19:48.932425 containerd[1462]: time="2025-11-08T00:19:48.932392643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 8 00:19:48.932565 containerd[1462]: time="2025-11-08T00:19:48.932546739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:19:48.932609 containerd[1462]: time="2025-11-08T00:19:48.932573828Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:19:48.932609 containerd[1462]: time="2025-11-08T00:19:48.932595230Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:19:48.932669 containerd[1462]: time="2025-11-08T00:19:48.932613703Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932669 containerd[1462]: time="2025-11-08T00:19:48.932630459Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932669 containerd[1462]: time="2025-11-08T00:19:48.932648356Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932669 containerd[1462]: time="2025-11-08T00:19:48.932667344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932775 containerd[1462]: time="2025-11-08T00:19:48.932705199Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932775 containerd[1462]: time="2025-11-08T00:19:48.932726430Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932775 containerd[1462]: time="2025-11-08T00:19:48.932745246Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932775 containerd[1462]: time="2025-11-08T00:19:48.932764598Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932804998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932823673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932835217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932847832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932859841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.932898 containerd[1462]: time="2025-11-08T00:19:48.932897554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.932912331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.932930460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.932947540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.932967740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.932981950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933022533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933042480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933062438Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933088708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933100485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933113029Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933181507Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933208121Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:19:48.933281 containerd[1462]: time="2025-11-08T00:19:48.933219210Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:19:48.933608 containerd[1462]: time="2025-11-08T00:19:48.933231017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:19:48.933608 containerd[1462]: time="2025-11-08T00:19:48.933240775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:19:48.933608 containerd[1462]: time="2025-11-08T00:19:48.933258096Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:19:48.933608 containerd[1462]: time="2025-11-08T00:19:48.933279639Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:19:48.933608 containerd[1462]: time="2025-11-08T00:19:48.933291042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:19:48.935402 containerd[1462]: time="2025-11-08T00:19:48.934541731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:19:48.935402 containerd[1462]: time="2025-11-08T00:19:48.934851650Z" level=info msg="Connect containerd service" Nov 8 00:19:48.935402 containerd[1462]: time="2025-11-08T00:19:48.934957762Z" level=info msg="using legacy CRI server" Nov 8 00:19:48.935402 containerd[1462]: time="2025-11-08T00:19:48.934981234Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:19:48.935402 containerd[1462]: time="2025-11-08T00:19:48.935286114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:19:48.936749 containerd[1462]: time="2025-11-08T00:19:48.936716745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:19:48.937028 
containerd[1462]: time="2025-11-08T00:19:48.936948117Z" level=info msg="Start subscribing containerd event" Nov 8 00:19:48.937194 containerd[1462]: time="2025-11-08T00:19:48.937167622Z" level=info msg="Start recovering state" Nov 8 00:19:48.937337 containerd[1462]: time="2025-11-08T00:19:48.937313708Z" level=info msg="Start event monitor" Nov 8 00:19:48.937367 containerd[1462]: time="2025-11-08T00:19:48.937343049Z" level=info msg="Start snapshots syncer" Nov 8 00:19:48.937407 containerd[1462]: time="2025-11-08T00:19:48.937383176Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:19:48.937407 containerd[1462]: time="2025-11-08T00:19:48.937401356Z" level=info msg="Start streaming server" Nov 8 00:19:48.937685 containerd[1462]: time="2025-11-08T00:19:48.937646757Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:19:48.937816 containerd[1462]: time="2025-11-08T00:19:48.937726356Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:19:48.937816 containerd[1462]: time="2025-11-08T00:19:48.937810509Z" level=info msg="containerd successfully booted in 0.191663s" Nov 8 00:19:48.938033 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:19:48.963070 tar[1459]: linux-amd64/README.md Nov 8 00:19:48.981139 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:19:49.826534 systemd-networkd[1407]: eth0: Gained IPv6LL Nov 8 00:19:49.830755 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:19:49.833494 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:19:49.845212 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:19:49.849157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:19:49.852342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:19:49.876958 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:19:49.877272 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:19:49.879739 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:19:49.883191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:19:50.632413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:19:50.634950 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:19:50.637943 systemd[1]: Startup finished in 1.125s (kernel) + 6.309s (initrd) + 5.488s (userspace) = 12.922s. Nov 8 00:19:50.662440 (kubelet)[1555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:19:51.315836 kubelet[1555]: E1108 00:19:51.315744 1555 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:19:51.320070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:19:51.320305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:19:51.320707 systemd[1]: kubelet.service: Consumed 1.284s CPU time. Nov 8 00:19:52.417881 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
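The kubelet failure above is the expected first-boot state: kubelet.service is enabled before the node has been initialized, and it exits immediately because /var/lib/kubelet/config.yaml does not exist yet. systemd keeps restarting the unit (the "Scheduled restart job, restart counter" lines recur below) until an installer writes that file. A minimal sketch of what kubelet is looking for, assuming a kubeadm-style setup; the fields are illustrative, chosen to match details that surface later in this log (systemd cgroup driver, static pod path):

    # /var/lib/kubelet/config.yaml (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests

On a kubeadm node this file is generated by kubeadm init rather than written by hand.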
Nov 8 00:19:52.419437 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:43944.service - OpenSSH per-connection server daemon (10.0.0.1:43944). Nov 8 00:19:52.464216 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:52.466604 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:52.477020 systemd-logind[1454]: New session 1 of user core. Nov 8 00:19:52.478636 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:19:52.490155 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:19:52.503483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:19:52.506968 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:19:52.515270 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:19:52.635158 systemd[1572]: Queued start job for default target default.target. Nov 8 00:19:52.647319 systemd[1572]: Created slice app.slice - User Application Slice. Nov 8 00:19:52.647349 systemd[1572]: Reached target paths.target - Paths. Nov 8 00:19:52.647363 systemd[1572]: Reached target timers.target - Timers. Nov 8 00:19:52.649113 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:19:52.661534 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:19:52.661684 systemd[1572]: Reached target sockets.target - Sockets. Nov 8 00:19:52.661701 systemd[1572]: Reached target basic.target - Basic System. Nov 8 00:19:52.661742 systemd[1572]: Reached target default.target - Main User Target. Nov 8 00:19:52.661782 systemd[1572]: Startup finished in 138ms. Nov 8 00:19:52.662298 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:19:52.664016 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:19:52.727397 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:43954.service - OpenSSH per-connection server daemon (10.0.0.1:43954). Nov 8 00:19:52.761333 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43954 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:52.763074 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:52.767841 systemd-logind[1454]: New session 2 of user core. Nov 8 00:19:52.778105 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:19:52.834803 sshd[1583]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:52.849534 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:43954.service: Deactivated successfully. Nov 8 00:19:52.852305 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:19:52.854461 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:19:52.866361 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:43970.service - OpenSSH per-connection server daemon (10.0.0.1:43970). Nov 8 00:19:52.867604 systemd-logind[1454]: Removed session 2. Nov 8 00:19:52.897421 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 43970 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:52.899291 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:52.903644 systemd-logind[1454]: New session 3 of user core. Nov 8 00:19:52.914027 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 8 00:19:52.965062 sshd[1590]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:52.978275 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:43970.service: Deactivated successfully. Nov 8 00:19:52.980720 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:19:52.982860 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:19:52.998171 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:43980.service - OpenSSH per-connection server daemon (10.0.0.1:43980). Nov 8 00:19:52.999396 systemd-logind[1454]: Removed session 3. Nov 8 00:19:53.027693 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 43980 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:53.029378 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:53.033977 systemd-logind[1454]: New session 4 of user core. Nov 8 00:19:53.048001 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:19:53.106672 sshd[1597]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:53.117524 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:43980.service: Deactivated successfully. Nov 8 00:19:53.119263 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:19:53.121229 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:19:53.122470 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:43982.service - OpenSSH per-connection server daemon (10.0.0.1:43982). Nov 8 00:19:53.123380 systemd-logind[1454]: Removed session 4. Nov 8 00:19:53.158027 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 43982 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:53.160115 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:53.165184 systemd-logind[1454]: New session 5 of user core. Nov 8 00:19:53.175012 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:19:53.234038 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:19:53.234379 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:19:53.250109 sudo[1607]: pam_unix(sudo:session): session closed for user root Nov 8 00:19:53.252332 sshd[1604]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:53.265102 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:43982.service: Deactivated successfully. Nov 8 00:19:53.267201 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:19:53.269119 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:19:53.288312 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:43992.service - OpenSSH per-connection server daemon (10.0.0.1:43992). Nov 8 00:19:53.289256 systemd-logind[1454]: Removed session 5. Nov 8 00:19:53.318275 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 43992 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:53.320069 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:53.324340 systemd-logind[1454]: New session 6 of user core. Nov 8 00:19:53.334002 systemd[1]: Started session-6.scope - Session 6 of User core. 
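The last sudo in session 5 above flips SELinux into enforcing mode at runtime (the dbus-daemon line earlier already reported SELinux support enabled). setenforce changes only the running kernel state; a quick check, assuming the usual SELinux userspace tools are on the node:

    setenforce 1   # the runtime switch run via sudo above
    getenforce     # prints "Enforcing" once the switch has taken effect

The persistent mode is configured separately, on most distributions via /etc/selinux/config.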
Nov 8 00:19:53.390234 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:19:53.390610 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:19:53.395159 sudo[1616]: pam_unix(sudo:session): session closed for user root Nov 8 00:19:53.404163 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:19:53.404628 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:19:53.427289 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:19:53.428931 auditctl[1619]: No rules Nov 8 00:19:53.430485 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:19:53.430783 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:19:53.432899 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:19:53.466963 augenrules[1637]: No rules Nov 8 00:19:53.469797 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:19:53.473080 sudo[1615]: pam_unix(sudo:session): session closed for user root Nov 8 00:19:53.475094 sshd[1612]: pam_unix(sshd:session): session closed for user core Nov 8 00:19:53.493966 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:43992.service: Deactivated successfully. Nov 8 00:19:53.496954 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:19:53.499270 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:19:53.510461 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:44002.service - OpenSSH per-connection server daemon (10.0.0.1:44002). Nov 8 00:19:53.512205 systemd-logind[1454]: Removed session 6. Nov 8 00:19:53.541157 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 44002 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:19:53.542961 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:19:53.547549 systemd-logind[1454]: New session 7 of user core. Nov 8 00:19:53.556998 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:19:53.618473 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:19:53.619152 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:19:54.087102 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:19:54.087302 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:19:54.620012 dockerd[1665]: time="2025-11-08T00:19:54.619927349Z" level=info msg="Starting up" Nov 8 00:19:55.340583 dockerd[1665]: time="2025-11-08T00:19:55.340500980Z" level=info msg="Loading containers: start." Nov 8 00:19:55.459904 kernel: Initializing XFRM netlink socket Nov 8 00:19:55.543458 systemd-networkd[1407]: docker0: Link UP Nov 8 00:19:55.568527 dockerd[1665]: time="2025-11-08T00:19:55.568464863Z" level=info msg="Loading containers: done." Nov 8 00:19:55.687089 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck341201921-merged.mount: Deactivated successfully. 
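The audit sequence at the start of this stretch (session 6) deletes the two shipped rules files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules": the kernel ruleset compiled from /etc/audit/rules.d is now empty. A sketch of the same reload done by hand, assuming the stock augenrules layout:

    # recompile /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and load it
    augenrules --load
    # list the kernel's active audit ruleset; prints "No rules" when empty, as above
    auditctl -l

Restarting audit-rules.service, as the sudo above does, runs this same compile-and-load path.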
Nov 8 00:19:55.689224 dockerd[1665]: time="2025-11-08T00:19:55.689166294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:19:55.689539 dockerd[1665]: time="2025-11-08T00:19:55.689351880Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:19:55.689539 dockerd[1665]: time="2025-11-08T00:19:55.689512570Z" level=info msg="Daemon has completed initialization" Nov 8 00:19:55.736703 dockerd[1665]: time="2025-11-08T00:19:55.736588260Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:19:55.736947 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:19:56.353035 containerd[1462]: time="2025-11-08T00:19:56.352964389Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:19:57.147088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444108595.mount: Deactivated successfully. Nov 8 00:19:58.375698 containerd[1462]: time="2025-11-08T00:19:58.375639347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:58.376314 containerd[1462]: time="2025-11-08T00:19:58.376244729Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 8 00:19:58.377554 containerd[1462]: time="2025-11-08T00:19:58.377518675Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:58.380765 containerd[1462]: time="2025-11-08T00:19:58.380713878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:58.381809 containerd[1462]: time="2025-11-08T00:19:58.381782382Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.028759235s" Nov 8 00:19:58.381856 containerd[1462]: time="2025-11-08T00:19:58.381820846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 8 00:19:58.382564 containerd[1462]: time="2025-11-08T00:19:58.382538688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:19:59.644290 containerd[1462]: time="2025-11-08T00:19:59.644211198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:59.644977 containerd[1462]: time="2025-11-08T00:19:59.644884785Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 8 00:19:59.646195 containerd[1462]: time="2025-11-08T00:19:59.646157970Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:59.649095 containerd[1462]: time="2025-11-08T00:19:59.649046611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:19:59.650301 containerd[1462]: time="2025-11-08T00:19:59.650263325Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.267693065s" Nov 8 00:19:59.650344 containerd[1462]: time="2025-11-08T00:19:59.650301037Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 8 00:19:59.650848 containerd[1462]: time="2025-11-08T00:19:59.650817144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:20:00.852600 containerd[1462]: time="2025-11-08T00:20:00.852515733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:00.853327 containerd[1462]: time="2025-11-08T00:20:00.853251761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 8 00:20:00.854526 containerd[1462]: time="2025-11-08T00:20:00.854482151Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:00.857534 containerd[1462]: time="2025-11-08T00:20:00.857488934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:00.858957 containerd[1462]: time="2025-11-08T00:20:00.858895983Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.208043872s" Nov 8 00:20:00.858957 containerd[1462]: time="2025-11-08T00:20:00.858946801Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 8 00:20:00.859590 containerd[1462]: time="2025-11-08T00:20:00.859562026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:20:01.407538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:20:01.440146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:01.752962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:20:01.757826 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:02.362503 kubelet[1885]: E1108 00:20:02.362350 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:02.369914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:02.370212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:02.916236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075192426.mount: Deactivated successfully. Nov 8 00:20:03.281504 containerd[1462]: time="2025-11-08T00:20:03.281304082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:03.282356 containerd[1462]: time="2025-11-08T00:20:03.282324058Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 8 00:20:03.283409 containerd[1462]: time="2025-11-08T00:20:03.283372679Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:03.285744 containerd[1462]: time="2025-11-08T00:20:03.285703975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:03.286371 containerd[1462]: time="2025-11-08T00:20:03.286337314Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.426688258s" Nov 8 00:20:03.286414 containerd[1462]: time="2025-11-08T00:20:03.286369850Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:20:03.287471 containerd[1462]: time="2025-11-08T00:20:03.287429604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:20:03.925416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457776174.mount: Deactivated successfully. 
Nov 8 00:20:05.494648 containerd[1462]: time="2025-11-08T00:20:05.494550487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:05.495491 containerd[1462]: time="2025-11-08T00:20:05.495400571Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 8 00:20:05.497139 containerd[1462]: time="2025-11-08T00:20:05.497092519Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:05.500523 containerd[1462]: time="2025-11-08T00:20:05.500481204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:05.501964 containerd[1462]: time="2025-11-08T00:20:05.501910429Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.214433798s" Nov 8 00:20:05.502020 containerd[1462]: time="2025-11-08T00:20:05.501963352Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:20:05.502759 containerd[1462]: time="2025-11-08T00:20:05.502699227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:20:06.194379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704956873.mount: Deactivated successfully. 
Nov 8 00:20:06.226806 containerd[1462]: time="2025-11-08T00:20:06.226762641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:06.227810 containerd[1462]: time="2025-11-08T00:20:06.227701177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 8 00:20:06.228936 containerd[1462]: time="2025-11-08T00:20:06.228893175Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:06.231749 containerd[1462]: time="2025-11-08T00:20:06.231694065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:06.232894 containerd[1462]: time="2025-11-08T00:20:06.232816441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 730.068814ms" Nov 8 00:20:06.232970 containerd[1462]: time="2025-11-08T00:20:06.232916341Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:20:06.233533 containerd[1462]: time="2025-11-08T00:20:06.233507482Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:20:09.508451 containerd[1462]: time="2025-11-08T00:20:09.508376999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:09.509277 containerd[1462]: time="2025-11-08T00:20:09.509182324Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 8 00:20:09.510705 containerd[1462]: time="2025-11-08T00:20:09.510660901Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:09.515250 containerd[1462]: time="2025-11-08T00:20:09.515204167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:09.516731 containerd[1462]: time="2025-11-08T00:20:09.516682244Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.28306132s" Nov 8 00:20:09.516780 containerd[1462]: time="2025-11-08T00:20:09.516738126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:20:12.407830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:20:12.425024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
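The pulls from 00:19:56 through 00:20:09 fetch the full v1.34.1 control-plane image set — kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd — the same set kubeadm prefetches with kubeadm config images pull. The PullImage lines come from containerd's CRI plugin, so the pulls can also be reproduced over the CRI socket recorded earlier in this log; a sketch, assuming crictl is installed (it does not itself appear in the log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/etcd:3.6.4-0
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

One mismatch worth noting: the CRI config dumped at containerd startup still lists SandboxImage registry.k8s.io/pause:3.8, while pause:3.10.1 is what gets pulled here; containerd's sandbox_image setting in config.toml is the usual place to reconcile the two.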
Nov 8 00:20:12.645532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:12.651087 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:13.135396 kubelet[2033]: E1108 00:20:13.135278 2033 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:13.141136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:13.141481 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:13.272761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:13.287505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:13.318563 systemd[1]: Reloading requested from client PID 2049 ('systemctl') (unit session-7.scope)... Nov 8 00:20:13.318580 systemd[1]: Reloading... Nov 8 00:20:13.421900 zram_generator::config[2088]: No configuration found. Nov 8 00:20:14.280223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:14.370177 systemd[1]: Reloading finished in 1051 ms. Nov 8 00:20:14.417177 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:20:14.417289 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:20:14.417636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:14.429260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:14.624501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:14.630466 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:16.064382 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:20:16.064382 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:16.064845 kubelet[2136]: I1108 00:20:16.064441 2136 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:16.361545 kubelet[2136]: I1108 00:20:16.361385 2136 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:20:16.361545 kubelet[2136]: I1108 00:20:16.361430 2136 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:16.361545 kubelet[2136]: I1108 00:20:16.361477 2136 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:20:16.361545 kubelet[2136]: I1108 00:20:16.361494 2136 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
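Here the restart loop finally breaks: after the third config-file failure, the systemctl-driven reload at 00:20:13 (from session 7, where install.sh is running under sudo) picks up new unit configuration, and the kubelet started at 00:20:14 as PID 2136 gets past config loading — it now warns only about KUBELET_EXTRA_ARGS being unset. The drop-in that typically wires a kubeadm install together looks like the following (illustrative; the actual unit files are not shown in this log):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical kubeadm drop-in)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

The --bootstrap-kubeconfig flag is what produces the "Client rotation is on, will bootstrap in background" line just below.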
Nov 8 00:20:16.361778 kubelet[2136]: I1108 00:20:16.361748 2136 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:20:16.385680 kubelet[2136]: I1108 00:20:16.385639 2136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:16.385850 kubelet[2136]: E1108 00:20:16.385670 2136 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:20:16.388977 kubelet[2136]: E1108 00:20:16.388929 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:16.389049 kubelet[2136]: I1108 00:20:16.388992 2136 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:20:16.395834 kubelet[2136]: I1108 00:20:16.395789 2136 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 8 00:20:16.396783 kubelet[2136]: I1108 00:20:16.396737 2136 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:16.397000 kubelet[2136]: I1108 00:20:16.396766 2136 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:20:16.397000 kubelet[2136]: I1108 00:20:16.396999 2136 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:16.397176 kubelet[2136]: I1108 00:20:16.397010 2136 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:20:16.397176 kubelet[2136]: I1108 00:20:16.397143 2136 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 
00:20:16.403284 kubelet[2136]: I1108 00:20:16.403235 2136 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:16.405023 kubelet[2136]: I1108 00:20:16.404977 2136 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:20:16.405101 kubelet[2136]: I1108 00:20:16.405046 2136 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:16.405175 kubelet[2136]: I1108 00:20:16.405143 2136 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:20:16.405175 kubelet[2136]: I1108 00:20:16.405169 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:16.411113 kubelet[2136]: E1108 00:20:16.410809 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:20:16.412852 kubelet[2136]: E1108 00:20:16.411352 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:20:16.413749 kubelet[2136]: I1108 00:20:16.413694 2136 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:16.414589 kubelet[2136]: I1108 00:20:16.414554 2136 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:20:16.414643 kubelet[2136]: I1108 00:20:16.414598 2136 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:20:16.414758 kubelet[2136]: W1108 00:20:16.414731 2136 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 8 00:20:16.419992 kubelet[2136]: I1108 00:20:16.419960 2136 server.go:1262] "Started kubelet" Nov 8 00:20:16.420450 kubelet[2136]: I1108 00:20:16.420370 2136 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:16.420450 kubelet[2136]: I1108 00:20:16.420448 2136 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:20:16.421032 kubelet[2136]: I1108 00:20:16.420997 2136 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:16.421161 kubelet[2136]: I1108 00:20:16.421130 2136 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:16.421211 kubelet[2136]: I1108 00:20:16.421174 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:16.424588 kubelet[2136]: I1108 00:20:16.423357 2136 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:20:16.425546 kubelet[2136]: I1108 00:20:16.425138 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:16.425722 kubelet[2136]: E1108 00:20:16.424460 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0167f970de7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:20:16.419925479 +0000 UTC m=+1.785158573,LastTimestamp:2025-11-08 00:20:16.419925479 +0000 UTC m=+1.785158573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:20:16.425979 kubelet[2136]: E1108 00:20:16.425946 2136 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:20:16.426039 kubelet[2136]: I1108 00:20:16.426007 2136 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:20:16.426315 kubelet[2136]: I1108 00:20:16.426287 2136 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:20:16.426589 kubelet[2136]: I1108 00:20:16.426378 2136 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:20:16.427441 kubelet[2136]: E1108 00:20:16.427407 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Nov 8 00:20:16.427577 kubelet[2136]: E1108 00:20:16.427535 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:20:16.428263 kubelet[2136]: E1108 00:20:16.428192 2136 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:20:16.429269 kubelet[2136]: I1108 00:20:16.429226 2136 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:20:16.429269 kubelet[2136]: I1108 00:20:16.429262 2136 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:20:16.429399 kubelet[2136]: I1108 00:20:16.429358 2136 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:16.445314 kubelet[2136]: I1108 00:20:16.445222 2136 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:16.446935 kubelet[2136]: I1108 00:20:16.446911 2136 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:16.446935 kubelet[2136]: I1108 00:20:16.446926 2136 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:16.447015 kubelet[2136]: I1108 00:20:16.446946 2136 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:16.448259 kubelet[2136]: I1108 00:20:16.448211 2136 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 8 00:20:16.448259 kubelet[2136]: I1108 00:20:16.448250 2136 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:20:16.448369 kubelet[2136]: I1108 00:20:16.448270 2136 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:20:16.448369 kubelet[2136]: E1108 00:20:16.448323 2136 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:16.451383 kubelet[2136]: E1108 00:20:16.449719 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:20:16.481285 kubelet[2136]: I1108 00:20:16.481183 2136 policy_none.go:49] "None policy: Start" Nov 8 00:20:16.481285 kubelet[2136]: I1108 00:20:16.481255 2136 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:20:16.481285 kubelet[2136]: I1108 00:20:16.481277 2136 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:20:16.486944 kubelet[2136]: I1108 00:20:16.486893 2136 policy_none.go:47] "Start" Nov 8 00:20:16.495002 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:20:16.507672 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:20:16.511450 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
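Every "connection refused" against https://10.0.0.52:6443 in this stretch is the normal control-plane bootstrap ordering, not a fault: this kubelet is itself responsible for starting the API server, as a static pod read from /etc/kubernetes/manifests (the "Adding static pod path" line above), so nothing listens on 6443 yet and every reflector list, lease request, and event post fails until the kube-apiserver container comes up. On a kubeadm control plane the manifests directory typically holds four files (an assumption here; only the path itself appears in this log):

    ls /etc/kubernetes/manifests/
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

The kubepods-burstable-pod<hash>.slice units created just below are the per-pod cgroups for exactly those static pods.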
Nov 8 00:20:16.522119 kubelet[2136]: E1108 00:20:16.522073 2136 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:20:16.522419 kubelet[2136]: I1108 00:20:16.522389 2136 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:16.522419 kubelet[2136]: I1108 00:20:16.522407 2136 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:16.522747 kubelet[2136]: I1108 00:20:16.522718 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:16.523767 kubelet[2136]: E1108 00:20:16.523686 2136 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:20:16.523767 kubelet[2136]: E1108 00:20:16.523741 2136 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:20:16.624031 kubelet[2136]: I1108 00:20:16.623910 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:16.625100 kubelet[2136]: E1108 00:20:16.625075 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Nov 8 00:20:16.625981 systemd[1]: Created slice kubepods-burstable-pod53b56128c2aa627547fca129f5394407.slice - libcontainer container kubepods-burstable-pod53b56128c2aa627547fca129f5394407.slice. Nov 8 00:20:16.627386 kubelet[2136]: I1108 00:20:16.627346 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:16.627386 kubelet[2136]: I1108 00:20:16.627382 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:16.627576 kubelet[2136]: I1108 00:20:16.627406 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:16.627576 kubelet[2136]: I1108 00:20:16.627446 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:16.627576 kubelet[2136]: I1108 00:20:16.627465 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:16.627576 kubelet[2136]: I1108 00:20:16.627483 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:16.627576 kubelet[2136]: I1108 00:20:16.627498 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:16.627688 kubelet[2136]: I1108 00:20:16.627529 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:16.627777 kubelet[2136]: E1108 00:20:16.627747 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Nov 8 00:20:16.633851 kubelet[2136]: E1108 00:20:16.633802 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:16.641180 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 8 00:20:16.643327 kubelet[2136]: E1108 00:20:16.643271 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:16.727901 kubelet[2136]: I1108 00:20:16.727847 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:16.743091 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Nov 8 00:20:16.745150 kubelet[2136]: E1108 00:20:16.745117 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:16.826934 kubelet[2136]: I1108 00:20:16.826899 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:16.827516 kubelet[2136]: E1108 00:20:16.827459 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Nov 8 00:20:16.938198 kubelet[2136]: E1108 00:20:16.938142 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:16.939286 containerd[1462]: time="2025-11-08T00:20:16.939228629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53b56128c2aa627547fca129f5394407,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:16.947107 kubelet[2136]: E1108 00:20:16.947072 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:16.947739 containerd[1462]: time="2025-11-08T00:20:16.947696429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:17.028729 kubelet[2136]: E1108 00:20:17.028661 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Nov 8 00:20:17.050128 kubelet[2136]: E1108 00:20:17.050042 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:17.050692 containerd[1462]: time="2025-11-08T00:20:17.050638288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:17.229070 kubelet[2136]: I1108 00:20:17.228938 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:17.229551 kubelet[2136]: E1108 00:20:17.229425 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Nov 8 00:20:17.463611 kubelet[2136]: E1108 00:20:17.463529 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:20:17.549810 kubelet[2136]: E1108 00:20:17.549593 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0167f970de7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:20:16.419925479 +0000 UTC m=+1.785158573,LastTimestamp:2025-11-08 00:20:16.419925479 +0000 UTC m=+1.785158573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:20:17.807776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586024918.mount: Deactivated successfully. Nov 8 00:20:17.814617 containerd[1462]: time="2025-11-08T00:20:17.814570171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:17.815696 containerd[1462]: time="2025-11-08T00:20:17.815632413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:17.816548 containerd[1462]: time="2025-11-08T00:20:17.816453821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:20:17.817447 containerd[1462]: time="2025-11-08T00:20:17.817404083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:17.818234 containerd[1462]: time="2025-11-08T00:20:17.818183847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:17.819096 containerd[1462]: time="2025-11-08T00:20:17.819054055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:17.820121 containerd[1462]: time="2025-11-08T00:20:17.820070452Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:17.825415 containerd[1462]: time="2025-11-08T00:20:17.825366757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:17.826621 containerd[1462]: time="2025-11-08T00:20:17.826578172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 878.78782ms" Nov 8 00:20:17.827421 containerd[1462]: time="2025-11-08T00:20:17.827382927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 888.053671ms" Nov 8 00:20:17.829691 kubelet[2136]: E1108 00:20:17.829640 2136 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Nov 8 00:20:17.831849 containerd[1462]: time="2025-11-08T00:20:17.831796658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.062564ms" Nov 8 00:20:17.854463 kubelet[2136]: E1108 00:20:17.854405 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:20:17.863167 kubelet[2136]: E1108 00:20:17.863139 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:20:17.955916 kubelet[2136]: E1108 00:20:17.955841 2136 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:20:18.032493 kubelet[2136]: I1108 00:20:18.032445 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:18.032837 kubelet[2136]: E1108 00:20:18.032807 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Nov 8 00:20:18.073609 containerd[1462]: time="2025-11-08T00:20:18.071721287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:18.073609 containerd[1462]: time="2025-11-08T00:20:18.073492209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:18.073609 containerd[1462]: time="2025-11-08T00:20:18.073543473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.074217 containerd[1462]: time="2025-11-08T00:20:18.073717916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.076158 containerd[1462]: time="2025-11-08T00:20:18.076054773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:18.076158 containerd[1462]: time="2025-11-08T00:20:18.076130556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:18.076158 containerd[1462]: time="2025-11-08T00:20:18.076149545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.076370 containerd[1462]: time="2025-11-08T00:20:18.076243605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.083967 containerd[1462]: time="2025-11-08T00:20:18.083427306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:18.087175 containerd[1462]: time="2025-11-08T00:20:18.086922456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:18.087175 containerd[1462]: time="2025-11-08T00:20:18.086961394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.087175 containerd[1462]: time="2025-11-08T00:20:18.087082309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:18.135118 systemd[1]: Started cri-containerd-d6ac684f44b70d35c1d2a11c6fcd6b6754d33eca951829578565a71cf5b54f3d.scope - libcontainer container d6ac684f44b70d35c1d2a11c6fcd6b6754d33eca951829578565a71cf5b54f3d. Nov 8 00:20:18.140231 systemd[1]: Started cri-containerd-070200a6613dacdb2a12cbd5804e0f55ec9aa8ebe2b271a9b1461d16d285f79f.scope - libcontainer container 070200a6613dacdb2a12cbd5804e0f55ec9aa8ebe2b271a9b1461d16d285f79f. Nov 8 00:20:18.144787 systemd[1]: Started cri-containerd-16df0cf7eb1b4865466db1cf3404299060317c74e2244cd3e7079c8c05c6fd9e.scope - libcontainer container 16df0cf7eb1b4865466db1cf3404299060317c74e2244cd3e7079c8c05c6fd9e. 
Nov 8 00:20:18.192781 containerd[1462]: time="2025-11-08T00:20:18.192531735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6ac684f44b70d35c1d2a11c6fcd6b6754d33eca951829578565a71cf5b54f3d\"" Nov 8 00:20:18.194531 kubelet[2136]: E1108 00:20:18.194232 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:18.202575 containerd[1462]: time="2025-11-08T00:20:18.202490388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53b56128c2aa627547fca129f5394407,Namespace:kube-system,Attempt:0,} returns sandbox id \"070200a6613dacdb2a12cbd5804e0f55ec9aa8ebe2b271a9b1461d16d285f79f\"" Nov 8 00:20:18.203149 kubelet[2136]: E1108 00:20:18.203119 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:18.204785 containerd[1462]: time="2025-11-08T00:20:18.203706108Z" level=info msg="CreateContainer within sandbox \"d6ac684f44b70d35c1d2a11c6fcd6b6754d33eca951829578565a71cf5b54f3d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:20:18.205200 containerd[1462]: time="2025-11-08T00:20:18.205160709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"16df0cf7eb1b4865466db1cf3404299060317c74e2244cd3e7079c8c05c6fd9e\"" Nov 8 00:20:18.205924 kubelet[2136]: E1108 00:20:18.205756 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:18.209496 containerd[1462]: time="2025-11-08T00:20:18.209457551Z" level=info msg="CreateContainer within sandbox \"070200a6613dacdb2a12cbd5804e0f55ec9aa8ebe2b271a9b1461d16d285f79f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:20:18.211931 containerd[1462]: time="2025-11-08T00:20:18.211892947Z" level=info msg="CreateContainer within sandbox \"16df0cf7eb1b4865466db1cf3404299060317c74e2244cd3e7079c8c05c6fd9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:20:18.231284 containerd[1462]: time="2025-11-08T00:20:18.231222445Z" level=info msg="CreateContainer within sandbox \"d6ac684f44b70d35c1d2a11c6fcd6b6754d33eca951829578565a71cf5b54f3d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a01f609bff352db64b5d192b8d747ff99f6750a05715012417fa0a73b5f06447\"" Nov 8 00:20:18.232059 containerd[1462]: time="2025-11-08T00:20:18.232021471Z" level=info msg="StartContainer for \"a01f609bff352db64b5d192b8d747ff99f6750a05715012417fa0a73b5f06447\"" Nov 8 00:20:18.236063 containerd[1462]: time="2025-11-08T00:20:18.236012655Z" level=info msg="CreateContainer within sandbox \"070200a6613dacdb2a12cbd5804e0f55ec9aa8ebe2b271a9b1461d16d285f79f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2699b440c3887b5ea68823658967f9c0807cebeda166833d02714a6a210ed461\"" Nov 8 00:20:18.237160 containerd[1462]: time="2025-11-08T00:20:18.237119043Z" level=info msg="CreateContainer within sandbox \"16df0cf7eb1b4865466db1cf3404299060317c74e2244cd3e7079c8c05c6fd9e\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47295162431be1467c782f2db829f07073b2edad45a73ec0a6b6ebeec18d9ba8\"" Nov 8 00:20:18.238927 containerd[1462]: time="2025-11-08T00:20:18.238848641Z" level=info msg="StartContainer for \"2699b440c3887b5ea68823658967f9c0807cebeda166833d02714a6a210ed461\"" Nov 8 00:20:18.238981 containerd[1462]: time="2025-11-08T00:20:18.238873311Z" level=info msg="StartContainer for \"47295162431be1467c782f2db829f07073b2edad45a73ec0a6b6ebeec18d9ba8\"" Nov 8 00:20:18.269061 systemd[1]: Started cri-containerd-a01f609bff352db64b5d192b8d747ff99f6750a05715012417fa0a73b5f06447.scope - libcontainer container a01f609bff352db64b5d192b8d747ff99f6750a05715012417fa0a73b5f06447. Nov 8 00:20:18.274531 systemd[1]: Started cri-containerd-2699b440c3887b5ea68823658967f9c0807cebeda166833d02714a6a210ed461.scope - libcontainer container 2699b440c3887b5ea68823658967f9c0807cebeda166833d02714a6a210ed461. Nov 8 00:20:18.279367 systemd[1]: Started cri-containerd-47295162431be1467c782f2db829f07073b2edad45a73ec0a6b6ebeec18d9ba8.scope - libcontainer container 47295162431be1467c782f2db829f07073b2edad45a73ec0a6b6ebeec18d9ba8. Nov 8 00:20:18.353424 containerd[1462]: time="2025-11-08T00:20:18.353072084Z" level=info msg="StartContainer for \"a01f609bff352db64b5d192b8d747ff99f6750a05715012417fa0a73b5f06447\" returns successfully" Nov 8 00:20:18.353424 containerd[1462]: time="2025-11-08T00:20:18.353337080Z" level=info msg="StartContainer for \"47295162431be1467c782f2db829f07073b2edad45a73ec0a6b6ebeec18d9ba8\" returns successfully" Nov 8 00:20:18.353424 containerd[1462]: time="2025-11-08T00:20:18.353375116Z" level=info msg="StartContainer for \"2699b440c3887b5ea68823658967f9c0807cebeda166833d02714a6a210ed461\" returns successfully" Nov 8 00:20:18.457532 kubelet[2136]: E1108 00:20:18.457480 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:18.457995 kubelet[2136]: E1108 00:20:18.457670 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:18.459994 kubelet[2136]: E1108 00:20:18.459770 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:18.459994 kubelet[2136]: E1108 00:20:18.459930 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:18.463276 kubelet[2136]: E1108 00:20:18.463255 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:18.463393 kubelet[2136]: E1108 00:20:18.463368 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:19.465233 kubelet[2136]: E1108 00:20:19.465161 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:19.466149 kubelet[2136]: E1108 00:20:19.465345 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:19.466149 kubelet[2136]: E1108 00:20:19.465436 2136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:20:19.466149 kubelet[2136]: E1108 00:20:19.465578 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:19.634841 kubelet[2136]: I1108 00:20:19.634156 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:19.982267 kubelet[2136]: E1108 00:20:19.982224 2136 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:20:20.045593 kubelet[2136]: I1108 00:20:20.045540 2136 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:20:20.045593 kubelet[2136]: E1108 00:20:20.045595 2136 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 8 00:20:20.060301 kubelet[2136]: E1108 00:20:20.060257 2136 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:20:20.161123 kubelet[2136]: E1108 00:20:20.161065 2136 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:20:20.327406 kubelet[2136]: I1108 00:20:20.327194 2136 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:20.333610 kubelet[2136]: E1108 00:20:20.333570 2136 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:20.333610 kubelet[2136]: I1108 00:20:20.333602 2136 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:20.335383 kubelet[2136]: E1108 00:20:20.335354 2136 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:20.335383 kubelet[2136]: I1108 00:20:20.335379 2136 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:20.337645 kubelet[2136]: E1108 00:20:20.337613 2136 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:20.413580 kubelet[2136]: I1108 00:20:20.413499 2136 apiserver.go:52] "Watching apiserver" Nov 8 00:20:20.427203 kubelet[2136]: I1108 00:20:20.427160 2136 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:20:22.487323 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-7.scope)... Nov 8 00:20:22.487342 systemd[1]: Reloading... Nov 8 00:20:22.580926 zram_generator::config[2467]: No configuration found. Nov 8 00:20:22.698110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 8 00:20:22.701803 kubelet[2136]: I1108 00:20:22.701769 2136 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:22.715491 kubelet[2136]: E1108 00:20:22.715403 2136 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:22.795389 systemd[1]: Reloading finished in 307 ms. Nov 8 00:20:22.842338 kubelet[2136]: I1108 00:20:22.842263 2136 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:22.842524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:22.853426 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:20:22.853700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:22.853754 systemd[1]: kubelet.service: Consumed 1.024s CPU time, 128.6M memory peak, 0B memory swap peak. Nov 8 00:20:22.865173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:23.038811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:23.045680 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:23.101884 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:20:23.101884 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:23.102381 kubelet[2509]: I1108 00:20:23.101948 2509 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:23.110097 kubelet[2509]: I1108 00:20:23.110041 2509 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:20:23.110097 kubelet[2509]: I1108 00:20:23.110080 2509 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:23.110184 kubelet[2509]: I1108 00:20:23.110118 2509 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:20:23.110184 kubelet[2509]: I1108 00:20:23.110126 2509 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:20:23.110518 kubelet[2509]: I1108 00:20:23.110486 2509 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:20:23.112066 kubelet[2509]: I1108 00:20:23.112033 2509 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:20:23.114660 kubelet[2509]: I1108 00:20:23.114592 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:23.118993 kubelet[2509]: E1108 00:20:23.118948 2509 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:23.119189 kubelet[2509]: I1108 00:20:23.119046 2509 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Nov 8 00:20:23.125279 kubelet[2509]: I1108 00:20:23.125242 2509 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 8 00:20:23.125573 kubelet[2509]: I1108 00:20:23.125520 2509 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:23.125785 kubelet[2509]: I1108 00:20:23.125560 2509 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:20:23.125785 kubelet[2509]: I1108 00:20:23.125784 2509 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:23.125923 kubelet[2509]: I1108 00:20:23.125797 2509 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:20:23.125923 kubelet[2509]: I1108 00:20:23.125828 2509 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:20:23.126979 kubelet[2509]: I1108 00:20:23.126943 2509 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:23.127183 kubelet[2509]: I1108 00:20:23.127153 2509 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:20:23.127183 kubelet[2509]: I1108 00:20:23.127176 2509 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:23.127246 kubelet[2509]: I1108 00:20:23.127206 2509 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:20:23.127246 kubelet[2509]: I1108 00:20:23.127222 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:23.130310 kubelet[2509]: I1108 00:20:23.130273 2509 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:23.130829 kubelet[2509]: I1108 00:20:23.130797 2509 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:20:23.130909 kubelet[2509]: I1108 00:20:23.130836 2509 kubelet.go:964] "Not starting PodCertificateRequest manager 
because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:20:23.137296 kubelet[2509]: I1108 00:20:23.134471 2509 server.go:1262] "Started kubelet" Nov 8 00:20:23.137296 kubelet[2509]: I1108 00:20:23.134652 2509 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:23.137296 kubelet[2509]: I1108 00:20:23.134782 2509 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:20:23.137296 kubelet[2509]: I1108 00:20:23.135319 2509 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:23.137296 kubelet[2509]: I1108 00:20:23.135424 2509 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:23.139741 kubelet[2509]: I1108 00:20:23.139696 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:23.140052 kubelet[2509]: I1108 00:20:23.140033 2509 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:20:23.142599 kubelet[2509]: I1108 00:20:23.135622 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:23.143377 kubelet[2509]: I1108 00:20:23.143343 2509 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:20:23.143793 kubelet[2509]: I1108 00:20:23.143765 2509 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:20:23.144033 kubelet[2509]: I1108 00:20:23.143996 2509 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:20:23.145062 kubelet[2509]: I1108 00:20:23.144891 2509 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:20:23.145133 kubelet[2509]: I1108 00:20:23.145097 2509 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:23.147929 kubelet[2509]: I1108 00:20:23.147589 2509 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:20:23.154654 kubelet[2509]: E1108 00:20:23.154610 2509 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:20:23.167850 kubelet[2509]: I1108 00:20:23.167798 2509 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:23.169809 kubelet[2509]: I1108 00:20:23.169786 2509 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:20:23.169809 kubelet[2509]: I1108 00:20:23.169810 2509 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:20:23.169943 kubelet[2509]: I1108 00:20:23.169837 2509 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:20:23.170176 kubelet[2509]: E1108 00:20:23.170132 2509 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:23.186173 kubelet[2509]: I1108 00:20:23.186141 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:23.186173 kubelet[2509]: I1108 00:20:23.186164 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:23.186337 kubelet[2509]: I1108 00:20:23.186191 2509 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:23.186389 kubelet[2509]: I1108 00:20:23.186371 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:20:23.186412 kubelet[2509]: I1108 00:20:23.186391 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:20:23.186437 kubelet[2509]: I1108 00:20:23.186418 2509 policy_none.go:49] "None policy: Start" Nov 8 00:20:23.186437 kubelet[2509]: I1108 00:20:23.186431 2509 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:20:23.186478 kubelet[2509]: I1108 00:20:23.186446 2509 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:20:23.186591 kubelet[2509]: I1108 00:20:23.186574 2509 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:20:23.186623 kubelet[2509]: I1108 00:20:23.186605 2509 policy_none.go:47] "Start" Nov 8 00:20:23.191437 kubelet[2509]: E1108 00:20:23.191310 2509 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:20:23.191564 kubelet[2509]: I1108 00:20:23.191540 2509 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:23.191589 kubelet[2509]: I1108 00:20:23.191557 2509 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:23.191786 kubelet[2509]: I1108 00:20:23.191762 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:23.198907 kubelet[2509]: E1108 00:20:23.197449 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:20:23.271037 kubelet[2509]: I1108 00:20:23.270982 2509 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:23.271251 kubelet[2509]: I1108 00:20:23.271225 2509 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:23.271525 kubelet[2509]: I1108 00:20:23.271496 2509 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:23.280090 kubelet[2509]: E1108 00:20:23.280006 2509 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:23.305625 kubelet[2509]: I1108 00:20:23.305507 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:20:23.316117 kubelet[2509]: I1108 00:20:23.316075 2509 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:20:23.316280 kubelet[2509]: I1108 00:20:23.316178 2509 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:20:23.345852 kubelet[2509]: I1108 00:20:23.345790 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:23.345852 kubelet[2509]: I1108 00:20:23.345836 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:23.345852 kubelet[2509]: I1108 00:20:23.345877 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:23.346207 kubelet[2509]: I1108 00:20:23.345929 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:23.346207 kubelet[2509]: I1108 00:20:23.345971 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:23.346207 kubelet[2509]: I1108 00:20:23.345991 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 
00:20:23.346207 kubelet[2509]: I1108 00:20:23.346008 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:23.346207 kubelet[2509]: I1108 00:20:23.346040 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:23.346351 kubelet[2509]: I1108 00:20:23.346063 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53b56128c2aa627547fca129f5394407-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53b56128c2aa627547fca129f5394407\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:20:23.579101 kubelet[2509]: E1108 00:20:23.578902 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:23.580844 kubelet[2509]: E1108 00:20:23.580785 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:23.581064 kubelet[2509]: E1108 00:20:23.580917 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:24.128190 kubelet[2509]: I1108 00:20:24.128128 2509 apiserver.go:52] "Watching apiserver" Nov 8 00:20:24.144794 kubelet[2509]: I1108 00:20:24.144772 2509 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:20:24.182779 kubelet[2509]: I1108 00:20:24.182567 2509 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:24.182779 kubelet[2509]: I1108 00:20:24.182657 2509 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:24.182779 kubelet[2509]: E1108 00:20:24.182670 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:24.299288 kubelet[2509]: E1108 00:20:24.298413 2509 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:20:24.299288 kubelet[2509]: E1108 00:20:24.298947 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:24.299740 kubelet[2509]: E1108 00:20:24.299712 2509 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:20:24.299855 kubelet[2509]: E1108 00:20:24.299834 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:24.390965 kubelet[2509]: I1108 00:20:24.390800 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.390762679 podStartE2EDuration="1.390762679s" podCreationTimestamp="2025-11-08 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:24.365918211 +0000 UTC m=+1.314312665" watchObservedRunningTime="2025-11-08 00:20:24.390762679 +0000 UTC m=+1.339157123" Nov 8 00:20:24.526971 kubelet[2509]: I1108 00:20:24.526908 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.526889502 podStartE2EDuration="1.526889502s" podCreationTimestamp="2025-11-08 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:24.526712981 +0000 UTC m=+1.475107435" watchObservedRunningTime="2025-11-08 00:20:24.526889502 +0000 UTC m=+1.475283956" Nov 8 00:20:24.617298 kubelet[2509]: I1108 00:20:24.617196 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.617066234 podStartE2EDuration="2.617066234s" podCreationTimestamp="2025-11-08 00:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:24.606491584 +0000 UTC m=+1.554886038" watchObservedRunningTime="2025-11-08 00:20:24.617066234 +0000 UTC m=+1.565460688" Nov 8 00:20:25.184021 kubelet[2509]: E1108 00:20:25.183975 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:25.184547 kubelet[2509]: E1108 00:20:25.183975 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:25.184547 kubelet[2509]: E1108 00:20:25.184102 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:26.185915 kubelet[2509]: E1108 00:20:26.185848 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:27.494457 kubelet[2509]: E1108 00:20:27.494401 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:29.302529 kubelet[2509]: I1108 00:20:29.302490 2509 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:20:29.303009 containerd[1462]: time="2025-11-08T00:20:29.302954474Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:20:29.303275 kubelet[2509]: I1108 00:20:29.303150 2509 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:20:30.958483 systemd[1]: Created slice kubepods-besteffort-pod1be59b97_77f6_4a75_b794_8a04f4644678.slice - libcontainer container kubepods-besteffort-pod1be59b97_77f6_4a75_b794_8a04f4644678.slice. Nov 8 00:20:30.971554 systemd[1]: Created slice kubepods-besteffort-poddcf06385_0939_4f92_8959_c84c539d9323.slice - libcontainer container kubepods-besteffort-poddcf06385_0939_4f92_8959_c84c539d9323.slice. Nov 8 00:20:31.011016 kubelet[2509]: I1108 00:20:31.010945 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1be59b97-77f6-4a75-b794-8a04f4644678-kube-proxy\") pod \"kube-proxy-4l5m2\" (UID: \"1be59b97-77f6-4a75-b794-8a04f4644678\") " pod="kube-system/kube-proxy-4l5m2" Nov 8 00:20:31.011016 kubelet[2509]: I1108 00:20:31.011018 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kngg\" (UniqueName: \"kubernetes.io/projected/dcf06385-0939-4f92-8959-c84c539d9323-kube-api-access-5kngg\") pod \"tigera-operator-65cdcdfd6d-nx2vp\" (UID: \"dcf06385-0939-4f92-8959-c84c539d9323\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nx2vp" Nov 8 00:20:31.011604 kubelet[2509]: I1108 00:20:31.011047 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1be59b97-77f6-4a75-b794-8a04f4644678-lib-modules\") pod \"kube-proxy-4l5m2\" (UID: \"1be59b97-77f6-4a75-b794-8a04f4644678\") " pod="kube-system/kube-proxy-4l5m2" Nov 8 00:20:31.011604 kubelet[2509]: I1108 00:20:31.011068 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dcf06385-0939-4f92-8959-c84c539d9323-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-nx2vp\" (UID: \"dcf06385-0939-4f92-8959-c84c539d9323\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-nx2vp" Nov 8 00:20:31.011604 kubelet[2509]: I1108 00:20:31.011115 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1be59b97-77f6-4a75-b794-8a04f4644678-xtables-lock\") pod \"kube-proxy-4l5m2\" (UID: \"1be59b97-77f6-4a75-b794-8a04f4644678\") " pod="kube-system/kube-proxy-4l5m2" Nov 8 00:20:31.011604 kubelet[2509]: I1108 00:20:31.011149 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w78q\" (UniqueName: \"kubernetes.io/projected/1be59b97-77f6-4a75-b794-8a04f4644678-kube-api-access-2w78q\") pod \"kube-proxy-4l5m2\" (UID: \"1be59b97-77f6-4a75-b794-8a04f4644678\") " pod="kube-system/kube-proxy-4l5m2" Nov 8 00:20:31.276126 kubelet[2509]: E1108 00:20:31.275974 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:31.277302 containerd[1462]: time="2025-11-08T00:20:31.277261212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4l5m2,Uid:1be59b97-77f6-4a75-b794-8a04f4644678,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:31.300258 containerd[1462]: time="2025-11-08T00:20:31.300178665Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nx2vp,Uid:dcf06385-0939-4f92-8959-c84c539d9323,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:20:31.524698 kubelet[2509]: E1108 00:20:31.524607 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:31.851643 containerd[1462]: time="2025-11-08T00:20:31.849094797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:31.851643 containerd[1462]: time="2025-11-08T00:20:31.851456954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:31.851643 containerd[1462]: time="2025-11-08T00:20:31.851486189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:31.851901 containerd[1462]: time="2025-11-08T00:20:31.851593254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:31.877625 containerd[1462]: time="2025-11-08T00:20:31.877345481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:31.877625 containerd[1462]: time="2025-11-08T00:20:31.877405756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:31.877625 containerd[1462]: time="2025-11-08T00:20:31.877418521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:31.877625 containerd[1462]: time="2025-11-08T00:20:31.877520276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:31.878045 systemd[1]: Started cri-containerd-0f7d023a66fdcbe96999ec468fba358f7a8e76d12dd7f60e8c417c48dfc3bb10.scope - libcontainer container 0f7d023a66fdcbe96999ec468fba358f7a8e76d12dd7f60e8c417c48dfc3bb10. Nov 8 00:20:31.907053 systemd[1]: Started cri-containerd-9304046dbb75b4deaa2d6930520a91587755fc1d5342ad15a98fcd025f0274d0.scope - libcontainer container 9304046dbb75b4deaa2d6930520a91587755fc1d5342ad15a98fcd025f0274d0. 
Nov 8 00:20:31.918527 containerd[1462]: time="2025-11-08T00:20:31.918451247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4l5m2,Uid:1be59b97-77f6-4a75-b794-8a04f4644678,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f7d023a66fdcbe96999ec468fba358f7a8e76d12dd7f60e8c417c48dfc3bb10\"" Nov 8 00:20:31.921018 kubelet[2509]: E1108 00:20:31.920961 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:31.953153 containerd[1462]: time="2025-11-08T00:20:31.953111732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-nx2vp,Uid:dcf06385-0939-4f92-8959-c84c539d9323,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9304046dbb75b4deaa2d6930520a91587755fc1d5342ad15a98fcd025f0274d0\"" Nov 8 00:20:31.954684 containerd[1462]: time="2025-11-08T00:20:31.954654950Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:20:31.958349 containerd[1462]: time="2025-11-08T00:20:31.958313119Z" level=info msg="CreateContainer within sandbox \"0f7d023a66fdcbe96999ec468fba358f7a8e76d12dd7f60e8c417c48dfc3bb10\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:20:32.199721 kubelet[2509]: E1108 00:20:32.199515 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:32.515444 containerd[1462]: time="2025-11-08T00:20:32.515283969Z" level=info msg="CreateContainer within sandbox \"0f7d023a66fdcbe96999ec468fba358f7a8e76d12dd7f60e8c417c48dfc3bb10\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d8c69439a5b6acedd24752d77b235e9363cc47b7e7f7fe4a6dbfec31f1972e0\"" Nov 8 00:20:32.516924 containerd[1462]: time="2025-11-08T00:20:32.516075304Z" level=info msg="StartContainer for \"9d8c69439a5b6acedd24752d77b235e9363cc47b7e7f7fe4a6dbfec31f1972e0\"" Nov 8 00:20:32.553014 systemd[1]: Started cri-containerd-9d8c69439a5b6acedd24752d77b235e9363cc47b7e7f7fe4a6dbfec31f1972e0.scope - libcontainer container 9d8c69439a5b6acedd24752d77b235e9363cc47b7e7f7fe4a6dbfec31f1972e0. Nov 8 00:20:32.680023 containerd[1462]: time="2025-11-08T00:20:32.679962568Z" level=info msg="StartContainer for \"9d8c69439a5b6acedd24752d77b235e9363cc47b7e7f7fe4a6dbfec31f1972e0\" returns successfully" Nov 8 00:20:33.201725 kubelet[2509]: E1108 00:20:33.201681 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:33.716888 update_engine[1455]: I20251108 00:20:33.716773 1455 update_attempter.cc:509] Updating boot flags... 
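Between 00:20:31.277 and 00:20:32.680 the lines above trace one complete CRI pod lifecycle for kube-proxy-4l5m2: RunPodSandbox, CreateContainer within the returned sandbox, then StartContainer. A small sketch reconstructs that timeline; the timestamps are copied from the containerd entries above, and the event text is abbreviated from the full log messages.

from datetime import datetime

# (timestamp, event) pairs taken from the containerd lines above
events = [
    ("00:20:31.277302", "RunPodSandbox requested (kube-proxy-4l5m2)"),
    ("00:20:31.918527", 'RunPodSandbox returns sandbox id "0f7d023a..."'),
    ("00:20:32.515444", 'CreateContainer returns container id "9d8c6943..."'),
    ("00:20:32.680023", "StartContainer returns successfully"),
]

t0 = datetime.strptime(events[0][0], "%H:%M:%S.%f")
for ts, what in events:
    elapsed = (datetime.strptime(ts, "%H:%M:%S.%f") - t0).total_seconds()
    print(f"+{elapsed:9.6f}s  {what}")
# Sandbox creation dominates (~0.64 s); the whole request-to-running
# sequence takes ~1.40 s on this node.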
Nov 8 00:20:33.745914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2827) Nov 8 00:20:33.794915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2830) Nov 8 00:20:34.203523 kubelet[2509]: E1108 00:20:34.203485 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:34.416766 kubelet[2509]: E1108 00:20:34.416719 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:34.429285 kubelet[2509]: I1108 00:20:34.428961 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4l5m2" podStartSLOduration=4.428937482 podStartE2EDuration="4.428937482s" podCreationTimestamp="2025-11-08 00:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:33.283575292 +0000 UTC m=+10.231969736" watchObservedRunningTime="2025-11-08 00:20:34.428937482 +0000 UTC m=+11.377331956" Nov 8 00:20:34.608311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3997174875.mount: Deactivated successfully. Nov 8 00:20:35.696947 containerd[1462]: time="2025-11-08T00:20:35.696884717Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:35.697547 containerd[1462]: time="2025-11-08T00:20:35.697508859Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:20:35.698670 containerd[1462]: time="2025-11-08T00:20:35.698639366Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:35.702212 containerd[1462]: time="2025-11-08T00:20:35.702165286Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:35.703179 containerd[1462]: time="2025-11-08T00:20:35.703133645Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.748429372s" Nov 8 00:20:35.703236 containerd[1462]: time="2025-11-08T00:20:35.703178039Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:20:35.712761 containerd[1462]: time="2025-11-08T00:20:35.712720103Z" level=info msg="CreateContainer within sandbox \"9304046dbb75b4deaa2d6930520a91587755fc1d5342ad15a98fcd025f0274d0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:20:35.725329 containerd[1462]: time="2025-11-08T00:20:35.725293282Z" level=info msg="CreateContainer within sandbox \"9304046dbb75b4deaa2d6930520a91587755fc1d5342ad15a98fcd025f0274d0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"43daf9fe68b0fa0ad25083d3e1a6158887e2ebeecc9afa79069c5b142582fbd8\"" Nov 8 00:20:35.725789 containerd[1462]: time="2025-11-08T00:20:35.725755655Z" level=info msg="StartContainer for \"43daf9fe68b0fa0ad25083d3e1a6158887e2ebeecc9afa79069c5b142582fbd8\"" Nov 8 00:20:35.777035 systemd[1]: Started cri-containerd-43daf9fe68b0fa0ad25083d3e1a6158887e2ebeecc9afa79069c5b142582fbd8.scope - libcontainer container 43daf9fe68b0fa0ad25083d3e1a6158887e2ebeecc9afa79069c5b142582fbd8. Nov 8 00:20:35.853444 containerd[1462]: time="2025-11-08T00:20:35.853365774Z" level=info msg="StartContainer for \"43daf9fe68b0fa0ad25083d3e1a6158887e2ebeecc9afa79069c5b142582fbd8\" returns successfully" Nov 8 00:20:36.283905 kubelet[2509]: I1108 00:20:36.282118 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-nx2vp" podStartSLOduration=2.532446017 podStartE2EDuration="6.282088797s" podCreationTimestamp="2025-11-08 00:20:30 +0000 UTC" firstStartedPulling="2025-11-08 00:20:31.954334375 +0000 UTC m=+8.902728829" lastFinishedPulling="2025-11-08 00:20:35.703977154 +0000 UTC m=+12.652371609" observedRunningTime="2025-11-08 00:20:36.281936897 +0000 UTC m=+13.230331351" watchObservedRunningTime="2025-11-08 00:20:36.282088797 +0000 UTC m=+13.230483251" Nov 8 00:20:37.505180 kubelet[2509]: E1108 00:20:37.505118 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:38.218153 kubelet[2509]: E1108 00:20:38.218093 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:41.492356 sudo[1648]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:41.496343 sshd[1645]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:41.502726 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:20:41.505896 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:44002.service: Deactivated successfully. Nov 8 00:20:41.510213 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:20:41.510638 systemd[1]: session-7.scope: Consumed 6.769s CPU time, 160.3M memory peak, 0B memory swap peak. Nov 8 00:20:41.512248 systemd-logind[1454]: Removed session 7. Nov 8 00:20:46.214605 systemd[1]: Created slice kubepods-besteffort-pod5b0f2d27_dd39_4141_86fd_d9afc7136aed.slice - libcontainer container kubepods-besteffort-pod5b0f2d27_dd39_4141_86fd_d9afc7136aed.slice. Nov 8 00:20:46.267755 systemd[1]: Created slice kubepods-besteffort-pod666165e7_10ad_460e_96f6_573e5f418d1c.slice - libcontainer container kubepods-besteffort-pod666165e7_10ad_460e_96f6_573e5f418d1c.slice. 
Nov 8 00:20:46.310992 kubelet[2509]: I1108 00:20:46.310920 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-var-lib-calico\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.310992 kubelet[2509]: I1108 00:20:46.310974 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5b0f2d27-dd39-4141-86fd-d9afc7136aed-typha-certs\") pod \"calico-typha-7ddc7488bf-2mjf4\" (UID: \"5b0f2d27-dd39-4141-86fd-d9afc7136aed\") " pod="calico-system/calico-typha-7ddc7488bf-2mjf4" Nov 8 00:20:46.310992 kubelet[2509]: I1108 00:20:46.310993 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txppg\" (UniqueName: \"kubernetes.io/projected/5b0f2d27-dd39-4141-86fd-d9afc7136aed-kube-api-access-txppg\") pod \"calico-typha-7ddc7488bf-2mjf4\" (UID: \"5b0f2d27-dd39-4141-86fd-d9afc7136aed\") " pod="calico-system/calico-typha-7ddc7488bf-2mjf4" Nov 8 00:20:46.311634 kubelet[2509]: I1108 00:20:46.311011 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-cni-log-dir\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311634 kubelet[2509]: I1108 00:20:46.311027 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-policysync\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311634 kubelet[2509]: I1108 00:20:46.311045 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-xtables-lock\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311634 kubelet[2509]: I1108 00:20:46.311063 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tp4b\" (UniqueName: \"kubernetes.io/projected/666165e7-10ad-460e-96f6-573e5f418d1c-kube-api-access-4tp4b\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311634 kubelet[2509]: I1108 00:20:46.311087 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/666165e7-10ad-460e-96f6-573e5f418d1c-node-certs\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311763 kubelet[2509]: I1108 00:20:46.311154 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-cni-bin-dir\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311763 kubelet[2509]: I1108 
00:20:46.311192 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-lib-modules\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311763 kubelet[2509]: I1108 00:20:46.311215 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666165e7-10ad-460e-96f6-573e5f418d1c-tigera-ca-bundle\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311763 kubelet[2509]: I1108 00:20:46.311247 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-var-run-calico\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311763 kubelet[2509]: I1108 00:20:46.311284 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b0f2d27-dd39-4141-86fd-d9afc7136aed-tigera-ca-bundle\") pod \"calico-typha-7ddc7488bf-2mjf4\" (UID: \"5b0f2d27-dd39-4141-86fd-d9afc7136aed\") " pod="calico-system/calico-typha-7ddc7488bf-2mjf4" Nov 8 00:20:46.311956 kubelet[2509]: I1108 00:20:46.311305 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-cni-net-dir\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.311956 kubelet[2509]: I1108 00:20:46.311319 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/666165e7-10ad-460e-96f6-573e5f418d1c-flexvol-driver-host\") pod \"calico-node-hqdfk\" (UID: \"666165e7-10ad-460e-96f6-573e5f418d1c\") " pod="calico-system/calico-node-hqdfk" Nov 8 00:20:46.410430 kubelet[2509]: E1108 00:20:46.410363 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:46.421528 kubelet[2509]: E1108 00:20:46.421426 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.421528 kubelet[2509]: W1108 00:20:46.421453 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.421528 kubelet[2509]: E1108 00:20:46.421482 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.425810 kubelet[2509]: E1108 00:20:46.425643 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.425810 kubelet[2509]: W1108 00:20:46.425693 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.425810 kubelet[2509]: E1108 00:20:46.425719 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.431337 kubelet[2509]: E1108 00:20:46.431199 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.431337 kubelet[2509]: W1108 00:20:46.431227 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.431337 kubelet[2509]: E1108 00:20:46.431250 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.432016 kubelet[2509]: E1108 00:20:46.431554 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.432016 kubelet[2509]: W1108 00:20:46.431568 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.432016 kubelet[2509]: E1108 00:20:46.431583 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.436886 kubelet[2509]: E1108 00:20:46.436853 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.436886 kubelet[2509]: W1108 00:20:46.436880 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.436979 kubelet[2509]: E1108 00:20:46.436900 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.481368 kubelet[2509]: E1108 00:20:46.481214 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.481368 kubelet[2509]: W1108 00:20:46.481239 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.481368 kubelet[2509]: E1108 00:20:46.481290 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.481690 kubelet[2509]: E1108 00:20:46.481655 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.481690 kubelet[2509]: W1108 00:20:46.481678 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.481690 kubelet[2509]: E1108 00:20:46.481701 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.482034 kubelet[2509]: E1108 00:20:46.482005 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.482034 kubelet[2509]: W1108 00:20:46.482021 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.482034 kubelet[2509]: E1108 00:20:46.482033 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.482423 kubelet[2509]: E1108 00:20:46.482404 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.482423 kubelet[2509]: W1108 00:20:46.482419 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.482504 kubelet[2509]: E1108 00:20:46.482432 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.482744 kubelet[2509]: E1108 00:20:46.482721 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.482744 kubelet[2509]: W1108 00:20:46.482735 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.482855 kubelet[2509]: E1108 00:20:46.482768 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.483096 kubelet[2509]: E1108 00:20:46.483076 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.483096 kubelet[2509]: W1108 00:20:46.483091 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.483180 kubelet[2509]: E1108 00:20:46.483102 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.483425 kubelet[2509]: E1108 00:20:46.483405 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.483425 kubelet[2509]: W1108 00:20:46.483423 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.483540 kubelet[2509]: E1108 00:20:46.483438 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.483762 kubelet[2509]: E1108 00:20:46.483740 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.483762 kubelet[2509]: W1108 00:20:46.483756 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.483838 kubelet[2509]: E1108 00:20:46.483767 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.484820 kubelet[2509]: E1108 00:20:46.484433 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.484820 kubelet[2509]: W1108 00:20:46.484468 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.484820 kubelet[2509]: E1108 00:20:46.484481 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.485135 kubelet[2509]: E1108 00:20:46.485110 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.485171 kubelet[2509]: W1108 00:20:46.485135 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.485171 kubelet[2509]: E1108 00:20:46.485149 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.485800 kubelet[2509]: E1108 00:20:46.485698 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.485800 kubelet[2509]: W1108 00:20:46.485756 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.485800 kubelet[2509]: E1108 00:20:46.485772 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.486401 kubelet[2509]: E1108 00:20:46.486378 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.486401 kubelet[2509]: W1108 00:20:46.486397 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.486477 kubelet[2509]: E1108 00:20:46.486413 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.486792 kubelet[2509]: E1108 00:20:46.486769 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.486910 kubelet[2509]: W1108 00:20:46.486882 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.486976 kubelet[2509]: E1108 00:20:46.486909 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.487313 kubelet[2509]: E1108 00:20:46.487290 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.487313 kubelet[2509]: W1108 00:20:46.487307 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.487400 kubelet[2509]: E1108 00:20:46.487321 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.487642 kubelet[2509]: E1108 00:20:46.487621 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.487642 kubelet[2509]: W1108 00:20:46.487638 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.487709 kubelet[2509]: E1108 00:20:46.487651 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.487946 kubelet[2509]: E1108 00:20:46.487926 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.487946 kubelet[2509]: W1108 00:20:46.487941 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.488023 kubelet[2509]: E1108 00:20:46.487952 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.488213 kubelet[2509]: E1108 00:20:46.488194 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.488213 kubelet[2509]: W1108 00:20:46.488209 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.488284 kubelet[2509]: E1108 00:20:46.488220 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.488466 kubelet[2509]: E1108 00:20:46.488449 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.488466 kubelet[2509]: W1108 00:20:46.488463 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.488523 kubelet[2509]: E1108 00:20:46.488474 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.488703 kubelet[2509]: E1108 00:20:46.488686 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.488703 kubelet[2509]: W1108 00:20:46.488700 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.488759 kubelet[2509]: E1108 00:20:46.488711 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.488998 kubelet[2509]: E1108 00:20:46.488980 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.488998 kubelet[2509]: W1108 00:20:46.488995 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.489040 kubelet[2509]: E1108 00:20:46.489007 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.512549 kubelet[2509]: E1108 00:20:46.512510 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.512549 kubelet[2509]: W1108 00:20:46.512535 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.512549 kubelet[2509]: E1108 00:20:46.512562 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.512809 kubelet[2509]: I1108 00:20:46.512608 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7e1a5e5-d1e7-4901-bce6-3563db023294-kubelet-dir\") pod \"csi-node-driver-rmjcb\" (UID: \"a7e1a5e5-d1e7-4901-bce6-3563db023294\") " pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:46.513005 kubelet[2509]: E1108 00:20:46.512976 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.513005 kubelet[2509]: W1108 00:20:46.513000 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.513061 kubelet[2509]: E1108 00:20:46.513010 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.513061 kubelet[2509]: I1108 00:20:46.513030 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxff\" (UniqueName: \"kubernetes.io/projected/a7e1a5e5-d1e7-4901-bce6-3563db023294-kube-api-access-mwxff\") pod \"csi-node-driver-rmjcb\" (UID: \"a7e1a5e5-d1e7-4901-bce6-3563db023294\") " pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:46.513329 kubelet[2509]: E1108 00:20:46.513303 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.513380 kubelet[2509]: W1108 00:20:46.513327 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.513380 kubelet[2509]: E1108 00:20:46.513350 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.513653 kubelet[2509]: E1108 00:20:46.513640 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.513653 kubelet[2509]: W1108 00:20:46.513651 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.513717 kubelet[2509]: E1108 00:20:46.513660 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.513916 kubelet[2509]: E1108 00:20:46.513902 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.513916 kubelet[2509]: W1108 00:20:46.513912 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.513990 kubelet[2509]: E1108 00:20:46.513924 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.513990 kubelet[2509]: I1108 00:20:46.513952 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7e1a5e5-d1e7-4901-bce6-3563db023294-socket-dir\") pod \"csi-node-driver-rmjcb\" (UID: \"a7e1a5e5-d1e7-4901-bce6-3563db023294\") " pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:46.514233 kubelet[2509]: E1108 00:20:46.514214 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.514233 kubelet[2509]: W1108 00:20:46.514229 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.514281 kubelet[2509]: E1108 00:20:46.514240 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.514927 kubelet[2509]: E1108 00:20:46.514898 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.514927 kubelet[2509]: W1108 00:20:46.514920 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.515100 kubelet[2509]: E1108 00:20:46.514934 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.515324 kubelet[2509]: E1108 00:20:46.515301 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.515324 kubelet[2509]: W1108 00:20:46.515317 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.515411 kubelet[2509]: E1108 00:20:46.515330 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.515411 kubelet[2509]: I1108 00:20:46.515348 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7e1a5e5-d1e7-4901-bce6-3563db023294-varrun\") pod \"csi-node-driver-rmjcb\" (UID: \"a7e1a5e5-d1e7-4901-bce6-3563db023294\") " pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:46.515691 kubelet[2509]: E1108 00:20:46.515673 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.515691 kubelet[2509]: W1108 00:20:46.515688 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.515776 kubelet[2509]: E1108 00:20:46.515699 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.518298 kubelet[2509]: I1108 00:20:46.515735 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7e1a5e5-d1e7-4901-bce6-3563db023294-registration-dir\") pod \"csi-node-driver-rmjcb\" (UID: \"a7e1a5e5-d1e7-4901-bce6-3563db023294\") " pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:46.521140 kubelet[2509]: E1108 00:20:46.521112 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.521140 kubelet[2509]: W1108 00:20:46.521131 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.521263 kubelet[2509]: E1108 00:20:46.521149 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.521531 kubelet[2509]: E1108 00:20:46.521517 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.521622 kubelet[2509]: W1108 00:20:46.521604 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.521706 kubelet[2509]: E1108 00:20:46.521622 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.522192 kubelet[2509]: E1108 00:20:46.522127 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.522192 kubelet[2509]: W1108 00:20:46.522143 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.522192 kubelet[2509]: E1108 00:20:46.522157 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.523961 kubelet[2509]: E1108 00:20:46.523912 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.523961 kubelet[2509]: W1108 00:20:46.523948 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.524308 kubelet[2509]: E1108 00:20:46.523975 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.524442 kubelet[2509]: E1108 00:20:46.524407 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:46.525081 kubelet[2509]: E1108 00:20:46.525058 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.525081 kubelet[2509]: W1108 00:20:46.525077 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.525081 kubelet[2509]: E1108 00:20:46.525090 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.525432 kubelet[2509]: E1108 00:20:46.525414 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.525432 kubelet[2509]: W1108 00:20:46.525427 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.525568 kubelet[2509]: E1108 00:20:46.525438 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.528262 containerd[1462]: time="2025-11-08T00:20:46.527594047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddc7488bf-2mjf4,Uid:5b0f2d27-dd39-4141-86fd-d9afc7136aed,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:46.552987 containerd[1462]: time="2025-11-08T00:20:46.552720627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:46.552987 containerd[1462]: time="2025-11-08T00:20:46.552788416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:46.552987 containerd[1462]: time="2025-11-08T00:20:46.552802111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:46.552987 containerd[1462]: time="2025-11-08T00:20:46.552941496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:46.572013 systemd[1]: Started cri-containerd-9337b2bf3523e56698c4f5d1e4c6ddf316776581f988d7aaa69bdc8b85839a17.scope - libcontainer container 9337b2bf3523e56698c4f5d1e4c6ddf316776581f988d7aaa69bdc8b85839a17. Nov 8 00:20:46.578453 kubelet[2509]: E1108 00:20:46.578378 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:46.580078 containerd[1462]: time="2025-11-08T00:20:46.579944223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hqdfk,Uid:666165e7-10ad-460e-96f6-573e5f418d1c,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:46.607757 containerd[1462]: time="2025-11-08T00:20:46.607659461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:46.607757 containerd[1462]: time="2025-11-08T00:20:46.607714716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:46.607757 containerd[1462]: time="2025-11-08T00:20:46.607733401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:46.607958 containerd[1462]: time="2025-11-08T00:20:46.607819895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:46.613019 containerd[1462]: time="2025-11-08T00:20:46.612974188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ddc7488bf-2mjf4,Uid:5b0f2d27-dd39-4141-86fd-d9afc7136aed,Namespace:calico-system,Attempt:0,} returns sandbox id \"9337b2bf3523e56698c4f5d1e4c6ddf316776581f988d7aaa69bdc8b85839a17\"" Nov 8 00:20:46.613900 kubelet[2509]: E1108 00:20:46.613710 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:46.614803 containerd[1462]: time="2025-11-08T00:20:46.614565605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:20:46.618740 kubelet[2509]: E1108 00:20:46.618719 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.618799 kubelet[2509]: W1108 00:20:46.618748 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.618799 kubelet[2509]: E1108 00:20:46.618766 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.619207 kubelet[2509]: E1108 00:20:46.619179 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.619207 kubelet[2509]: W1108 00:20:46.619202 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.619314 kubelet[2509]: E1108 00:20:46.619213 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.619611 kubelet[2509]: E1108 00:20:46.619595 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.619695 kubelet[2509]: W1108 00:20:46.619678 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.619695 kubelet[2509]: E1108 00:20:46.619694 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.620282 kubelet[2509]: E1108 00:20:46.620156 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.620282 kubelet[2509]: W1108 00:20:46.620168 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.620282 kubelet[2509]: E1108 00:20:46.620177 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.620828 kubelet[2509]: E1108 00:20:46.620722 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.620828 kubelet[2509]: W1108 00:20:46.620817 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.620926 kubelet[2509]: E1108 00:20:46.620838 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.621285 kubelet[2509]: E1108 00:20:46.621271 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.621391 kubelet[2509]: W1108 00:20:46.621357 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.621391 kubelet[2509]: E1108 00:20:46.621392 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.621944 kubelet[2509]: E1108 00:20:46.621915 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.621944 kubelet[2509]: W1108 00:20:46.621943 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.622019 kubelet[2509]: E1108 00:20:46.621954 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.622445 kubelet[2509]: E1108 00:20:46.622422 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.622445 kubelet[2509]: W1108 00:20:46.622435 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.622445 kubelet[2509]: E1108 00:20:46.622445 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:20:46.622964 kubelet[2509]: E1108 00:20:46.622949 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:46.622964 kubelet[2509]: W1108 00:20:46.622961 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:46.623018 kubelet[2509]: E1108 00:20:46.622971 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:20:46.638034 systemd[1]: Started cri-containerd-9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25.scope - libcontainer container 9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25. Nov 8 00:20:46.661689 containerd[1462]: time="2025-11-08T00:20:46.661631131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hqdfk,Uid:666165e7-10ad-460e-96f6-573e5f418d1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\"" Nov 8 00:20:46.662548 kubelet[2509]: E1108 00:20:46.662471 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:48.171409 kubelet[2509]: E1108 00:20:48.171212 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:48.277193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599010468.mount: Deactivated successfully.
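The driver-call.go/plugins.go triplet above repeats many times per probe pass and has a single mechanical cause: kubelet scans each directory under its FlexVolume plugin dir, execs the driver binary with the lone argument init, and JSON-decodes whatever lands on stdout. Here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the exec fails, stdout is empty, and decoding an empty string yields exactly "unexpected end of JSON input". A minimal Go sketch of the decode step (DriverStatus is abbreviated from the FlexVolume stdout convention, not copied from kubelet source):

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus mirrors the minimal JSON shape a FlexVolume driver
// prints on stdout; only the field needed for this demo is included.
type DriverStatus struct {
	Status string `json:"status"`
}

func main() {
	var st DriverStatus

	// A missing driver binary produces no output at all; decoding the
	// empty string fails with the same message seen in the log above.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input

	// A well-formed init response decodes cleanly.
	err = json.Unmarshal([]byte(`{"status":"Success"}`), &st)
	fmt.Println(err, st.Status) // <nil> Success
}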
Nov 8 00:20:48.990635 containerd[1462]: time="2025-11-08T00:20:48.990558329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.991259 containerd[1462]: time="2025-11-08T00:20:48.991197631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:20:48.992429 containerd[1462]: time="2025-11-08T00:20:48.992394679Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.995014 containerd[1462]: time="2025-11-08T00:20:48.994971009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:48.995596 containerd[1462]: time="2025-11-08T00:20:48.995564314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.380968701s" Nov 8 00:20:48.995629 containerd[1462]: time="2025-11-08T00:20:48.995595162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:20:48.996908 containerd[1462]: time="2025-11-08T00:20:48.996858405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:20:49.010692 containerd[1462]: time="2025-11-08T00:20:49.010641752Z" level=info msg="CreateContainer within sandbox \"9337b2bf3523e56698c4f5d1e4c6ddf316776581f988d7aaa69bdc8b85839a17\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:20:49.026586 containerd[1462]: time="2025-11-08T00:20:49.026539399Z" level=info msg="CreateContainer within sandbox \"9337b2bf3523e56698c4f5d1e4c6ddf316776581f988d7aaa69bdc8b85839a17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b0a81bbd4c0f277c3344f8fbaf967a721aad0c18b7e23cf96e987b95ae14658\"" Nov 8 00:20:49.027156 containerd[1462]: time="2025-11-08T00:20:49.027127172Z" level=info msg="StartContainer for \"2b0a81bbd4c0f277c3344f8fbaf967a721aad0c18b7e23cf96e987b95ae14658\"" Nov 8 00:20:49.062013 systemd[1]: Started cri-containerd-2b0a81bbd4c0f277c3344f8fbaf967a721aad0c18b7e23cf96e987b95ae14658.scope - libcontainer container 2b0a81bbd4c0f277c3344f8fbaf967a721aad0c18b7e23cf96e987b95ae14658. 
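As a rough cross-check of the typha pull above: 35234628 bytes read / 2.380968701 s ≈ 14.8 MB/s of registry transfer. This is an approximation only; "bytes read" counts what containerd fetched during the pull window, so it estimates network throughput rather than anything about the unpacked image.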
Nov 8 00:20:49.154549 containerd[1462]: time="2025-11-08T00:20:49.154474324Z" level=info msg="StartContainer for \"2b0a81bbd4c0f277c3344f8fbaf967a721aad0c18b7e23cf96e987b95ae14658\" returns successfully" Nov 8 00:20:49.246452 kubelet[2509]: E1108 00:20:49.246316 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:49.276726 kubelet[2509]: I1108 00:20:49.276646 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ddc7488bf-2mjf4" podStartSLOduration=0.894296056 podStartE2EDuration="3.276624304s" podCreationTimestamp="2025-11-08 00:20:46 +0000 UTC" firstStartedPulling="2025-11-08 00:20:46.614269383 +0000 UTC m=+23.562663837" lastFinishedPulling="2025-11-08 00:20:48.996597631 +0000 UTC m=+25.944992085" observedRunningTime="2025-11-08 00:20:49.276158462 +0000 UTC m=+26.224552916" watchObservedRunningTime="2025-11-08 00:20:49.276624304 +0000 UTC m=+26.225018758" Nov 8 00:20:49.305961 kubelet[2509]: E1108 00:20:49.305900 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:49.305961 kubelet[2509]: W1108 00:20:49.305933 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:49.305961 kubelet[2509]: E1108 00:20:49.305975 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
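The startup-latency entry above is internally consistent: kubelet's tracker reports, as the SLO figure, the end-to-end duration minus the time spent pulling images. Using the monotonic (m=+) offsets from that same line:

  pull window         = 25.944992085 s − 23.562663837 s = 2.382328248 s
  podStartSLOduration = 3.276624304 s − 2.382328248 s   = 0.894296056 s

which matches the logged podStartSLOduration=0.894296056 exactly.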
Nov 8 00:20:50.170213 kubelet[2509]: E1108 00:20:50.170138 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:50.247202 kubelet[2509]: I1108 00:20:50.247155 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:20:50.247671 kubelet[2509]: E1108 00:20:50.247525 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:50.319927 kubelet[2509]: E1108 00:20:50.319892 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:20:50.319927 kubelet[2509]: W1108 00:20:50.319915 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:20:50.319927 kubelet[2509]: E1108 00:20:50.319938 2509 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
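The recurring dns.go:154 errors come from the classic resolver limit: glibc honors at most three nameserver entries, so kubelet applies the first three from the node's /etc/resolv.conf and logs the rest as omitted. Given the applied line in the entries above, the node file plausibly looked like the sketch below; the first three entries are taken from the log, the fourth is a hypothetical stand-in for whatever was dropped:

nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.0.2.53   # hypothetical fourth entry; anything past three is omitted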
Nov 8 00:20:50.514155 containerd[1462]: time="2025-11-08T00:20:50.513984013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:50.514900 containerd[1462]: time="2025-11-08T00:20:50.514850884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:20:50.516713 containerd[1462]: time="2025-11-08T00:20:50.516185019Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:50.519144 containerd[1462]: time="2025-11-08T00:20:50.519078417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:50.519673 containerd[1462]: time="2025-11-08T00:20:50.519639840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.52273681s" Nov 8 00:20:50.519730 containerd[1462]: time="2025-11-08T00:20:50.519674326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:20:50.524921 containerd[1462]: time="2025-11-08T00:20:50.524888788Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:20:50.544034 containerd[1462]: time="2025-11-08T00:20:50.543984169Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6\"" Nov 8 00:20:50.544938 containerd[1462]: time="2025-11-08T00:20:50.544856630Z" level=info msg="StartContainer for \"aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6\"" Nov 8 00:20:50.580335 systemd[1]: Started cri-containerd-aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6.scope - libcontainer container aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6. Nov 8 00:20:50.624783 containerd[1462]: time="2025-11-08T00:20:50.624734704Z" level=info msg="StartContainer for \"aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6\" returns successfully" Nov 8 00:20:50.647007 systemd[1]: cri-containerd-aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6.scope: Deactivated successfully.
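The flexvol-driver init container created above is what installs the uds binary into the nodeagent~uds directory the earlier probe errors pointed at, consistent with those errors not recurring after 00:20:50 in this log. For reference, the init handshake a FlexVolume driver has to answer is tiny; a minimal sketch (field set abbreviated from the FlexVolume convention; Calico's real driver does more than this):

package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet JSON-decodes this from stdout while probing plugins;
		// "attach": false tells it not to route attach/detach calls here.
		fmt.Print(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// Any call this sketch does not implement.
	fmt.Print(`{"status":"Not supported"}`)
	os.Exit(1)
}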
Nov 8 00:20:50.913902 containerd[1462]: time="2025-11-08T00:20:50.911043689Z" level=info msg="shim disconnected" id=aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6 namespace=k8s.io Nov 8 00:20:50.914156 containerd[1462]: time="2025-11-08T00:20:50.913909334Z" level=warning msg="cleaning up after shim disconnected" id=aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6 namespace=k8s.io Nov 8 00:20:50.914156 containerd[1462]: time="2025-11-08T00:20:50.913931657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:20:51.003069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aecd5beadb098c0a381029b63e7b2ee2d30095b8279a309502654326c03c50f6-rootfs.mount: Deactivated successfully. Nov 8 00:20:51.251045 kubelet[2509]: E1108 00:20:51.250898 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:51.251905 containerd[1462]: time="2025-11-08T00:20:51.251851872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:20:52.171174 kubelet[2509]: E1108 00:20:52.171084 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:54.172466 kubelet[2509]: E1108 00:20:54.170853 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:54.874006 containerd[1462]: time="2025-11-08T00:20:54.873911588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.874835 containerd[1462]: time="2025-11-08T00:20:54.874746166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:20:54.875926 containerd[1462]: time="2025-11-08T00:20:54.875883788Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.879249 containerd[1462]: time="2025-11-08T00:20:54.879178028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:54.880009 containerd[1462]: time="2025-11-08T00:20:54.879957792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.628039083s" Nov 8 00:20:54.880093 containerd[1462]: time="2025-11-08T00:20:54.880006574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:20:54.887285 containerd[1462]: 
time="2025-11-08T00:20:54.887179047Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:20:54.907938 containerd[1462]: time="2025-11-08T00:20:54.907881296Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f\"" Nov 8 00:20:54.908569 containerd[1462]: time="2025-11-08T00:20:54.908499775Z" level=info msg="StartContainer for \"7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f\"" Nov 8 00:20:54.947110 systemd[1]: Started cri-containerd-7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f.scope - libcontainer container 7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f. Nov 8 00:20:55.141347 containerd[1462]: time="2025-11-08T00:20:55.141184720Z" level=info msg="StartContainer for \"7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f\" returns successfully" Nov 8 00:20:55.262923 kubelet[2509]: E1108 00:20:55.262853 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:56.170142 kubelet[2509]: E1108 00:20:56.170098 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:56.265297 kubelet[2509]: E1108 00:20:56.265238 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:56.468020 kubelet[2509]: I1108 00:20:56.467802 2509 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:20:56.468230 systemd[1]: cri-containerd-7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f.scope: Deactivated successfully. Nov 8 00:20:56.507917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f-rootfs.mount: Deactivated successfully. Nov 8 00:20:56.516162 containerd[1462]: time="2025-11-08T00:20:56.515762021Z" level=info msg="shim disconnected" id=7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f namespace=k8s.io Nov 8 00:20:56.516162 containerd[1462]: time="2025-11-08T00:20:56.515827054Z" level=warning msg="cleaning up after shim disconnected" id=7a468079eec10debe18fe67dd06b968a8c26c01f12bed093c777f1edf9637a1f namespace=k8s.io Nov 8 00:20:56.516162 containerd[1462]: time="2025-11-08T00:20:56.515835741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:20:56.518381 systemd[1]: Created slice kubepods-besteffort-pod65117d06_024f_4bb6_a156_fce351c46adb.slice - libcontainer container kubepods-besteffort-pod65117d06_024f_4bb6_a156_fce351c46adb.slice. Nov 8 00:20:56.532398 systemd[1]: Created slice kubepods-besteffort-pode3df79a4_2d69_4d1b_a3d8_5080134a94f0.slice - libcontainer container kubepods-besteffort-pode3df79a4_2d69_4d1b_a3d8_5080134a94f0.slice. 
Nov 8 00:20:56.538802 systemd[1]: Created slice kubepods-burstable-pod6095f60b_9a5f_4061_ba74_c474c415b963.slice - libcontainer container kubepods-burstable-pod6095f60b_9a5f_4061_ba74_c474c415b963.slice. Nov 8 00:20:56.540682 kubelet[2509]: I1108 00:20:56.540098 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqppx\" (UniqueName: \"kubernetes.io/projected/e3df79a4-2d69-4d1b-a3d8-5080134a94f0-kube-api-access-tqppx\") pod \"calico-kube-controllers-564bf5b6db-26fpn\" (UID: \"e3df79a4-2d69-4d1b-a3d8-5080134a94f0\") " pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" Nov 8 00:20:56.541368 kubelet[2509]: I1108 00:20:56.541222 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbzp\" (UniqueName: \"kubernetes.io/projected/6095f60b-9a5f-4061-ba74-c474c415b963-kube-api-access-8rbzp\") pod \"coredns-66bc5c9577-6z69l\" (UID: \"6095f60b-9a5f-4061-ba74-c474c415b963\") " pod="kube-system/coredns-66bc5c9577-6z69l" Nov 8 00:20:56.541368 kubelet[2509]: I1108 00:20:56.541249 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/779024e8-f065-402a-9618-c2d1616b455b-calico-apiserver-certs\") pod \"calico-apiserver-6477c478b5-xfnk2\" (UID: \"779024e8-f065-402a-9618-c2d1616b455b\") " pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" Nov 8 00:20:56.541368 kubelet[2509]: I1108 00:20:56.541262 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p848\" (UniqueName: \"kubernetes.io/projected/9e0d5390-5a60-44e9-a40d-847919eb2c6d-kube-api-access-8p848\") pod \"coredns-66bc5c9577-8f6fz\" (UID: \"9e0d5390-5a60-44e9-a40d-847919eb2c6d\") " pod="kube-system/coredns-66bc5c9577-8f6fz" Nov 8 00:20:56.541368 kubelet[2509]: I1108 00:20:56.541278 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n726b\" (UniqueName: \"kubernetes.io/projected/6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d-kube-api-access-n726b\") pod \"goldmane-7c778bb748-4wpxj\" (UID: \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\") " pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:56.541368 kubelet[2509]: I1108 00:20:56.541291 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8lh\" (UniqueName: \"kubernetes.io/projected/48366bf3-7c5b-44ee-9949-cb0f73b78d3c-kube-api-access-sh8lh\") pod \"calico-apiserver-6477c478b5-v47ws\" (UID: \"48366bf3-7c5b-44ee-9949-cb0f73b78d3c\") " pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" Nov 8 00:20:56.541550 kubelet[2509]: I1108 00:20:56.541305 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e0d5390-5a60-44e9-a40d-847919eb2c6d-config-volume\") pod \"coredns-66bc5c9577-8f6fz\" (UID: \"9e0d5390-5a60-44e9-a40d-847919eb2c6d\") " pod="kube-system/coredns-66bc5c9577-8f6fz" Nov 8 00:20:56.541550 kubelet[2509]: I1108 00:20:56.541333 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjf8j\" (UniqueName: \"kubernetes.io/projected/779024e8-f065-402a-9618-c2d1616b455b-kube-api-access-fjf8j\") pod \"calico-apiserver-6477c478b5-xfnk2\" (UID: \"779024e8-f065-402a-9618-c2d1616b455b\") " 
pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" Nov 8 00:20:56.541550 kubelet[2509]: I1108 00:20:56.541388 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/48366bf3-7c5b-44ee-9949-cb0f73b78d3c-calico-apiserver-certs\") pod \"calico-apiserver-6477c478b5-v47ws\" (UID: \"48366bf3-7c5b-44ee-9949-cb0f73b78d3c\") " pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" Nov 8 00:20:56.541550 kubelet[2509]: I1108 00:20:56.541434 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65117d06-024f-4bb6-a156-fce351c46adb-whisker-backend-key-pair\") pod \"whisker-564f48998b-v7v74\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " pod="calico-system/whisker-564f48998b-v7v74" Nov 8 00:20:56.541550 kubelet[2509]: I1108 00:20:56.541454 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g66tw\" (UniqueName: \"kubernetes.io/projected/65117d06-024f-4bb6-a156-fce351c46adb-kube-api-access-g66tw\") pod \"whisker-564f48998b-v7v74\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " pod="calico-system/whisker-564f48998b-v7v74" Nov 8 00:20:56.542892 kubelet[2509]: I1108 00:20:56.541560 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65117d06-024f-4bb6-a156-fce351c46adb-whisker-ca-bundle\") pod \"whisker-564f48998b-v7v74\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " pod="calico-system/whisker-564f48998b-v7v74" Nov 8 00:20:56.542892 kubelet[2509]: I1108 00:20:56.542133 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3df79a4-2d69-4d1b-a3d8-5080134a94f0-tigera-ca-bundle\") pod \"calico-kube-controllers-564bf5b6db-26fpn\" (UID: \"e3df79a4-2d69-4d1b-a3d8-5080134a94f0\") " pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" Nov 8 00:20:56.542892 kubelet[2509]: I1108 00:20:56.542333 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6095f60b-9a5f-4061-ba74-c474c415b963-config-volume\") pod \"coredns-66bc5c9577-6z69l\" (UID: \"6095f60b-9a5f-4061-ba74-c474c415b963\") " pod="kube-system/coredns-66bc5c9577-6z69l" Nov 8 00:20:56.542892 kubelet[2509]: I1108 00:20:56.542383 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d-config\") pod \"goldmane-7c778bb748-4wpxj\" (UID: \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\") " pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:56.542892 kubelet[2509]: I1108 00:20:56.542403 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d-goldmane-key-pair\") pod \"goldmane-7c778bb748-4wpxj\" (UID: \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\") " pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:56.543090 kubelet[2509]: I1108 00:20:56.542436 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-4wpxj\" (UID: \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\") " pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:56.553033 systemd[1]: Created slice kubepods-burstable-pod9e0d5390_5a60_44e9_a40d_847919eb2c6d.slice - libcontainer container kubepods-burstable-pod9e0d5390_5a60_44e9_a40d_847919eb2c6d.slice. Nov 8 00:20:56.562431 systemd[1]: Created slice kubepods-besteffort-pod779024e8_f065_402a_9618_c2d1616b455b.slice - libcontainer container kubepods-besteffort-pod779024e8_f065_402a_9618_c2d1616b455b.slice. Nov 8 00:20:56.572924 systemd[1]: Created slice kubepods-besteffort-pod6d4ff612_553f_4b12_9b88_ad8ba2ea5f5d.slice - libcontainer container kubepods-besteffort-pod6d4ff612_553f_4b12_9b88_ad8ba2ea5f5d.slice. Nov 8 00:20:56.578627 systemd[1]: Created slice kubepods-besteffort-pod48366bf3_7c5b_44ee_9949_cb0f73b78d3c.slice - libcontainer container kubepods-besteffort-pod48366bf3_7c5b_44ee_9949_cb0f73b78d3c.slice. Nov 8 00:20:56.834712 containerd[1462]: time="2025-11-08T00:20:56.834524258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564f48998b-v7v74,Uid:65117d06-024f-4bb6-a156-fce351c46adb,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:56.840909 containerd[1462]: time="2025-11-08T00:20:56.840851598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564bf5b6db-26fpn,Uid:e3df79a4-2d69-4d1b-a3d8-5080134a94f0,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:56.846127 kubelet[2509]: E1108 00:20:56.846059 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:56.846916 containerd[1462]: time="2025-11-08T00:20:56.846855767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6z69l,Uid:6095f60b-9a5f-4061-ba74-c474c415b963,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:56.861493 kubelet[2509]: E1108 00:20:56.861458 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:56.862704 containerd[1462]: time="2025-11-08T00:20:56.862663816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8f6fz,Uid:9e0d5390-5a60-44e9-a40d-847919eb2c6d,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:56.873536 containerd[1462]: time="2025-11-08T00:20:56.873478677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-xfnk2,Uid:779024e8-f065-402a-9618-c2d1616b455b,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:20:56.880176 containerd[1462]: time="2025-11-08T00:20:56.880120081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4wpxj,Uid:6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:56.887293 containerd[1462]: time="2025-11-08T00:20:56.887245558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-v47ws,Uid:48366bf3-7c5b-44ee-9949-cb0f73b78d3c,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:20:56.990816 containerd[1462]: time="2025-11-08T00:20:56.990642771Z" level=error msg="Failed to destroy network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:56.991537 containerd[1462]: time="2025-11-08T00:20:56.991379053Z" level=error msg="encountered an error cleaning up failed sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:56.991537 containerd[1462]: time="2025-11-08T00:20:56.991433135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564bf5b6db-26fpn,Uid:e3df79a4-2d69-4d1b-a3d8-5080134a94f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:56.991765 kubelet[2509]: E1108 00:20:56.991712 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:56.991854 kubelet[2509]: E1108 00:20:56.991802 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" Nov 8 00:20:56.991854 kubelet[2509]: E1108 00:20:56.991823 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" Nov 8 00:20:56.991954 kubelet[2509]: E1108 00:20:56.991894 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564bf5b6db-26fpn_calico-system(e3df79a4-2d69-4d1b-a3d8-5080134a94f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564bf5b6db-26fpn_calico-system(e3df79a4-2d69-4d1b-a3d8-5080134a94f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0" Nov 8 00:20:57.004115 containerd[1462]: time="2025-11-08T00:20:57.004038890Z" level=error msg="Failed to destroy network for sandbox 
\"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.004828 containerd[1462]: time="2025-11-08T00:20:57.004679180Z" level=error msg="encountered an error cleaning up failed sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.004828 containerd[1462]: time="2025-11-08T00:20:57.004735086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564f48998b-v7v74,Uid:65117d06-024f-4bb6-a156-fce351c46adb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.005160 kubelet[2509]: E1108 00:20:57.005079 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.005227 kubelet[2509]: E1108 00:20:57.005159 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564f48998b-v7v74" Nov 8 00:20:57.005227 kubelet[2509]: E1108 00:20:57.005182 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564f48998b-v7v74" Nov 8 00:20:57.005287 kubelet[2509]: E1108 00:20:57.005239 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-564f48998b-v7v74_calico-system(65117d06-024f-4bb6-a156-fce351c46adb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-564f48998b-v7v74_calico-system(65117d06-024f-4bb6-a156-fce351c46adb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-564f48998b-v7v74" podUID="65117d06-024f-4bb6-a156-fce351c46adb" Nov 8 00:20:57.006581 containerd[1462]: time="2025-11-08T00:20:57.006507565Z" 
level=error msg="Failed to destroy network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.007009 containerd[1462]: time="2025-11-08T00:20:57.006975499Z" level=error msg="encountered an error cleaning up failed sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.007096 containerd[1462]: time="2025-11-08T00:20:57.007033028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6z69l,Uid:6095f60b-9a5f-4061-ba74-c474c415b963,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.007274 kubelet[2509]: E1108 00:20:57.007237 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.007573 kubelet[2509]: E1108 00:20:57.007547 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6z69l" Nov 8 00:20:57.007614 kubelet[2509]: E1108 00:20:57.007572 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6z69l" Nov 8 00:20:57.007643 kubelet[2509]: E1108 00:20:57.007623 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6z69l_kube-system(6095f60b-9a5f-4061-ba74-c474c415b963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6z69l_kube-system(6095f60b-9a5f-4061-ba74-c474c415b963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6z69l" podUID="6095f60b-9a5f-4061-ba74-c474c415b963" Nov 8 00:20:57.010780 containerd[1462]: 
time="2025-11-08T00:20:57.010645344Z" level=error msg="Failed to destroy network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.012026 containerd[1462]: time="2025-11-08T00:20:57.012001006Z" level=error msg="encountered an error cleaning up failed sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.012283 containerd[1462]: time="2025-11-08T00:20:57.012258783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8f6fz,Uid:9e0d5390-5a60-44e9-a40d-847919eb2c6d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.012800 kubelet[2509]: E1108 00:20:57.012760 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.013172 kubelet[2509]: E1108 00:20:57.012807 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8f6fz" Nov 8 00:20:57.013172 kubelet[2509]: E1108 00:20:57.012825 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8f6fz" Nov 8 00:20:57.013172 kubelet[2509]: E1108 00:20:57.012880 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8f6fz_kube-system(9e0d5390-5a60-44e9-a40d-847919eb2c6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8f6fz_kube-system(9e0d5390-5a60-44e9-a40d-847919eb2c6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8f6fz" podUID="9e0d5390-5a60-44e9-a40d-847919eb2c6d" 
Nov 8 00:20:57.041511 containerd[1462]: time="2025-11-08T00:20:57.041431781Z" level=error msg="Failed to destroy network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.042708 containerd[1462]: time="2025-11-08T00:20:57.042676373Z" level=error msg="encountered an error cleaning up failed sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.042771 containerd[1462]: time="2025-11-08T00:20:57.042736808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4wpxj,Uid:6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.043468 kubelet[2509]: E1108 00:20:57.043425 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.043586 kubelet[2509]: E1108 00:20:57.043570 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:57.043655 kubelet[2509]: E1108 00:20:57.043640 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-4wpxj" Nov 8 00:20:57.043886 kubelet[2509]: E1108 00:20:57.043782 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-4wpxj_calico-system(6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-4wpxj_calico-system(6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d" Nov 8 00:20:57.046774 containerd[1462]: time="2025-11-08T00:20:57.046566394Z" level=error msg="Failed to destroy network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.047466 containerd[1462]: time="2025-11-08T00:20:57.047190614Z" level=error msg="encountered an error cleaning up failed sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.047466 containerd[1462]: time="2025-11-08T00:20:57.047247271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-v47ws,Uid:48366bf3-7c5b-44ee-9949-cb0f73b78d3c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.047569 kubelet[2509]: E1108 00:20:57.047488 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.047569 kubelet[2509]: E1108 00:20:57.047516 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" Nov 8 00:20:57.047569 kubelet[2509]: E1108 00:20:57.047531 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" Nov 8 00:20:57.047678 kubelet[2509]: E1108 00:20:57.047565 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6477c478b5-v47ws_calico-apiserver(48366bf3-7c5b-44ee-9949-cb0f73b78d3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6477c478b5-v47ws_calico-apiserver(48366bf3-7c5b-44ee-9949-cb0f73b78d3c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c" Nov 8 00:20:57.048237 containerd[1462]: time="2025-11-08T00:20:57.048197768Z" level=error msg="Failed to destroy network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.048563 containerd[1462]: time="2025-11-08T00:20:57.048533080Z" level=error msg="encountered an error cleaning up failed sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.048604 containerd[1462]: time="2025-11-08T00:20:57.048574379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-xfnk2,Uid:779024e8-f065-402a-9618-c2d1616b455b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.048893 kubelet[2509]: E1108 00:20:57.048823 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.048968 kubelet[2509]: E1108 00:20:57.048912 2509 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" Nov 8 00:20:57.048968 kubelet[2509]: E1108 00:20:57.048935 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" Nov 8 00:20:57.049028 kubelet[2509]: E1108 00:20:57.048998 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6477c478b5-xfnk2_calico-apiserver(779024e8-f065-402a-9618-c2d1616b455b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6477c478b5-xfnk2_calico-apiserver(779024e8-f065-402a-9618-c2d1616b455b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b" Nov 8 00:20:57.268429 kubelet[2509]: I1108 00:20:57.268385 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:20:57.269734 containerd[1462]: time="2025-11-08T00:20:57.269167295Z" level=info msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\"" Nov 8 00:20:57.269734 containerd[1462]: time="2025-11-08T00:20:57.269368686Z" level=info msg="Ensure that sandbox d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4 in task-service has been cleanup successfully" Nov 8 00:20:57.270064 kubelet[2509]: I1108 00:20:57.269314 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:20:57.270161 containerd[1462]: time="2025-11-08T00:20:57.269741019Z" level=info msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" Nov 8 00:20:57.270161 containerd[1462]: time="2025-11-08T00:20:57.269902135Z" level=info msg="Ensure that sandbox 341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26 in task-service has been cleanup successfully" Nov 8 00:20:57.271038 kubelet[2509]: I1108 00:20:57.271010 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:20:57.272718 containerd[1462]: time="2025-11-08T00:20:57.272295567Z" level=info msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\"" Nov 8 00:20:57.272718 containerd[1462]: time="2025-11-08T00:20:57.272480867Z" level=info msg="Ensure that sandbox a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f in task-service has been cleanup successfully" Nov 8 00:20:57.272821 kubelet[2509]: I1108 00:20:57.272509 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:20:57.273148 containerd[1462]: time="2025-11-08T00:20:57.273112491Z" level=info msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\"" Nov 8 00:20:57.273922 containerd[1462]: time="2025-11-08T00:20:57.273431764Z" level=info msg="Ensure that sandbox 94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02 in task-service has been cleanup successfully" Nov 8 00:20:57.275651 kubelet[2509]: I1108 00:20:57.275578 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:20:57.276759 containerd[1462]: time="2025-11-08T00:20:57.276332065Z" level=info msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\"" Nov 8 00:20:57.276759 containerd[1462]: time="2025-11-08T00:20:57.276538355Z" level=info msg="Ensure that sandbox ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a in task-service has been cleanup successfully" Nov 8 00:20:57.281402 kubelet[2509]: I1108 00:20:57.281358 2509 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:20:57.284496 containerd[1462]: time="2025-11-08T00:20:57.284410791Z" level=info msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" Nov 8 00:20:57.285669 containerd[1462]: time="2025-11-08T00:20:57.285563630Z" level=info msg="Ensure that sandbox 3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21 in task-service has been cleanup successfully" Nov 8 00:20:57.290224 kubelet[2509]: I1108 00:20:57.290184 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:20:57.292777 containerd[1462]: time="2025-11-08T00:20:57.291791989Z" level=info msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\"" Nov 8 00:20:57.292777 containerd[1462]: time="2025-11-08T00:20:57.292022495Z" level=info msg="Ensure that sandbox 3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0 in task-service has been cleanup successfully" Nov 8 00:20:57.297199 kubelet[2509]: E1108 00:20:57.297024 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:20:57.305128 containerd[1462]: time="2025-11-08T00:20:57.300191191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:20:57.330396 containerd[1462]: time="2025-11-08T00:20:57.330321860Z" level=error msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" failed" error="failed to destroy network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.330702 kubelet[2509]: E1108 00:20:57.330638 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:20:57.330765 kubelet[2509]: E1108 00:20:57.330716 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4"} Nov 8 00:20:57.330821 kubelet[2509]: E1108 00:20:57.330790 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48366bf3-7c5b-44ee-9949-cb0f73b78d3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.330925 kubelet[2509]: E1108 00:20:57.330830 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"48366bf3-7c5b-44ee-9949-cb0f73b78d3c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c" Nov 8 00:20:57.336084 containerd[1462]: time="2025-11-08T00:20:57.336003005Z" level=error msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" failed" error="failed to destroy network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.337886 kubelet[2509]: E1108 00:20:57.337090 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:20:57.337886 kubelet[2509]: E1108 00:20:57.337154 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"} Nov 8 00:20:57.337886 kubelet[2509]: E1108 00:20:57.337195 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3df79a4-2d69-4d1b-a3d8-5080134a94f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.337886 kubelet[2509]: E1108 00:20:57.337231 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3df79a4-2d69-4d1b-a3d8-5080134a94f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0" Nov 8 00:20:57.343056 containerd[1462]: time="2025-11-08T00:20:57.342982185Z" level=error msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" failed" error="failed to destroy network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.343352 kubelet[2509]: E1108 00:20:57.343298 2509 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:20:57.343414 kubelet[2509]: E1108 00:20:57.343357 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"} Nov 8 00:20:57.343414 kubelet[2509]: E1108 00:20:57.343400 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"779024e8-f065-402a-9618-c2d1616b455b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.343564 kubelet[2509]: E1108 00:20:57.343436 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"779024e8-f065-402a-9618-c2d1616b455b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b" Nov 8 00:20:57.359022 containerd[1462]: time="2025-11-08T00:20:57.358941242Z" level=error msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" failed" error="failed to destroy network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.359755 kubelet[2509]: E1108 00:20:57.359493 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:20:57.359755 kubelet[2509]: E1108 00:20:57.359631 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21"} Nov 8 00:20:57.359755 kubelet[2509]: E1108 00:20:57.359680 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6095f60b-9a5f-4061-ba74-c474c415b963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.359755 kubelet[2509]: E1108 00:20:57.359718 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6095f60b-9a5f-4061-ba74-c474c415b963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6z69l" podUID="6095f60b-9a5f-4061-ba74-c474c415b963" Nov 8 00:20:57.360722 containerd[1462]: time="2025-11-08T00:20:57.360684987Z" level=error msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" failed" error="failed to destroy network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.360904 kubelet[2509]: E1108 00:20:57.360829 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:20:57.360952 kubelet[2509]: E1108 00:20:57.360912 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02"} Nov 8 00:20:57.360952 kubelet[2509]: E1108 00:20:57.360940 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.361061 kubelet[2509]: E1108 00:20:57.360966 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d" Nov 8 00:20:57.362783 containerd[1462]: time="2025-11-08T00:20:57.362744550Z" level=error msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" failed" error="failed to destroy network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.363103 kubelet[2509]: E1108 00:20:57.363058 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:20:57.363103 kubelet[2509]: E1108 00:20:57.363103 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26"} Nov 8 00:20:57.363217 kubelet[2509]: E1108 00:20:57.363129 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e0d5390-5a60-44e9-a40d-847919eb2c6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.363217 kubelet[2509]: E1108 00:20:57.363154 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e0d5390-5a60-44e9-a40d-847919eb2c6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8f6fz" podUID="9e0d5390-5a60-44e9-a40d-847919eb2c6d" Nov 8 00:20:57.370457 containerd[1462]: time="2025-11-08T00:20:57.370397812Z" level=error msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" failed" error="failed to destroy network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:57.370642 kubelet[2509]: E1108 00:20:57.370607 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:20:57.370718 kubelet[2509]: E1108 00:20:57.370650 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"} Nov 8 00:20:57.370718 kubelet[2509]: E1108 00:20:57.370681 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65117d06-024f-4bb6-a156-fce351c46adb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:57.370885 kubelet[2509]: E1108 00:20:57.370714 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65117d06-024f-4bb6-a156-fce351c46adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-564f48998b-v7v74" podUID="65117d06-024f-4bb6-a156-fce351c46adb" Nov 8 00:20:57.511266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0-shm.mount: Deactivated successfully. Nov 8 00:20:58.176587 systemd[1]: Created slice kubepods-besteffort-poda7e1a5e5_d1e7_4901_bce6_3563db023294.slice - libcontainer container kubepods-besteffort-poda7e1a5e5_d1e7_4901_bce6_3563db023294.slice. Nov 8 00:20:58.181835 containerd[1462]: time="2025-11-08T00:20:58.181795770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjcb,Uid:a7e1a5e5-d1e7-4901-bce6-3563db023294,Namespace:calico-system,Attempt:0,}" Nov 8 00:20:58.244002 containerd[1462]: time="2025-11-08T00:20:58.243913031Z" level=error msg="Failed to destroy network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:58.244501 containerd[1462]: time="2025-11-08T00:20:58.244454864Z" level=error msg="encountered an error cleaning up failed sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:58.244551 containerd[1462]: time="2025-11-08T00:20:58.244519035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjcb,Uid:a7e1a5e5-d1e7-4901-bce6-3563db023294,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:58.244906 kubelet[2509]: E1108 00:20:58.244817 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:58.245116 kubelet[2509]: E1108 00:20:58.244920 2509 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:58.245116 kubelet[2509]: E1108 00:20:58.244943 2509 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rmjcb" Nov 8 00:20:58.245116 kubelet[2509]: E1108 00:20:58.245004 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:58.246657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae-shm.mount: Deactivated successfully. 
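[Editor's note] Every failure above reduces to one root cause: the Calico CNI plugin cannot find /var/lib/calico/nodename, the file that the calico/node container writes at startup to tell the plugin which node it is running on. Until that file exists, every sandbox ADD and DEL fails with the same stat error, which is why the csi-node-driver, coredns, goldmane, whisker, and apiserver pods are all stuck behind it. A minimal Go sketch of the guard the error message describes (a hypothetical helper for illustration, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the log shows the plugin checking.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename mirrors the check behind the "stat /var/lib/calico/nodename"
// errors above: calico/node writes this file at startup, and the CNI plugin
// refuses to run ADD or DEL until it exists. Hypothetical helper, not
// Calico's actual source.
func readNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			return "", fmt.Errorf("stat %s: no such file or directory: "+
				"check that the calico/node container is running and has "+
				"mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}

The loop breaks once the calico/node image finishes pulling and the container starts, as the entries at 00:21:06-00:21:07 below show.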
Nov 8 00:20:58.298843 kubelet[2509]: I1108 00:20:58.298811 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Nov 8 00:20:58.299534 containerd[1462]: time="2025-11-08T00:20:58.299490976Z" level=info msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\"" Nov 8 00:20:58.299786 containerd[1462]: time="2025-11-08T00:20:58.299753452Z" level=info msg="Ensure that sandbox 9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae in task-service has been cleanup successfully" Nov 8 00:20:58.330618 containerd[1462]: time="2025-11-08T00:20:58.330567109Z" level=error msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" failed" error="failed to destroy network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:20:58.330899 kubelet[2509]: E1108 00:20:58.330837 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Nov 8 00:20:58.330977 kubelet[2509]: E1108 00:20:58.330918 2509 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"} Nov 8 00:20:58.330977 kubelet[2509]: E1108 00:20:58.330955 2509 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7e1a5e5-d1e7-4901-bce6-3563db023294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:20:58.331061 kubelet[2509]: E1108 00:20:58.330986 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7e1a5e5-d1e7-4901-bce6-3563db023294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294" Nov 8 00:20:59.719812 kubelet[2509]: I1108 00:20:59.719751 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:20:59.720325 kubelet[2509]: E1108 00:20:59.720185 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:00.304139 kubelet[2509]: E1108 00:21:00.304087 2509 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:04.371221 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:52200.service - OpenSSH per-connection server daemon (10.0.0.1:52200). Nov 8 00:21:04.428142 sshd[3782]: Accepted publickey for core from 10.0.0.1 port 52200 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:21:04.430975 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:04.438952 systemd-logind[1454]: New session 8 of user core. Nov 8 00:21:04.446037 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:21:04.611209 sshd[3782]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:04.615973 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:52200.service: Deactivated successfully. Nov 8 00:21:04.618554 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:21:04.619668 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:21:04.621359 systemd-logind[1454]: Removed session 8. Nov 8 00:21:06.208396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166954192.mount: Deactivated successfully. Nov 8 00:21:06.836726 containerd[1462]: time="2025-11-08T00:21:06.836630537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.837620 containerd[1462]: time="2025-11-08T00:21:06.837547016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:21:06.839688 containerd[1462]: time="2025-11-08T00:21:06.839641899Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.842199 containerd[1462]: time="2025-11-08T00:21:06.842115206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:06.842920 containerd[1462]: time="2025-11-08T00:21:06.842880440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.542597265s" Nov 8 00:21:06.842920 containerd[1462]: time="2025-11-08T00:21:06.842917731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:21:06.869304 containerd[1462]: time="2025-11-08T00:21:06.869228981Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:21:06.905567 containerd[1462]: time="2025-11-08T00:21:06.905500106Z" level=info msg="CreateContainer within sandbox \"9fd523e248bcdcf3518065547e702625791007d9f95331b85df2f791663e5c25\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec4d31128c55c18c93f2027e72ec935fb2740cbd98f577f83e48622f5f897bcc\"" Nov 8 00:21:06.906439 containerd[1462]: time="2025-11-08T00:21:06.906409612Z" level=info 
msg="StartContainer for \"ec4d31128c55c18c93f2027e72ec935fb2740cbd98f577f83e48622f5f897bcc\"" Nov 8 00:21:06.991102 systemd[1]: Started cri-containerd-ec4d31128c55c18c93f2027e72ec935fb2740cbd98f577f83e48622f5f897bcc.scope - libcontainer container ec4d31128c55c18c93f2027e72ec935fb2740cbd98f577f83e48622f5f897bcc. Nov 8 00:21:07.291952 containerd[1462]: time="2025-11-08T00:21:07.291327112Z" level=info msg="StartContainer for \"ec4d31128c55c18c93f2027e72ec935fb2740cbd98f577f83e48622f5f897bcc\" returns successfully" Nov 8 00:21:07.302375 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:21:07.303318 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:21:07.321727 kubelet[2509]: E1108 00:21:07.321683 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:07.341350 kubelet[2509]: I1108 00:21:07.339583 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hqdfk" podStartSLOduration=1.159138435 podStartE2EDuration="21.339557905s" podCreationTimestamp="2025-11-08 00:20:46 +0000 UTC" firstStartedPulling="2025-11-08 00:20:46.663258606 +0000 UTC m=+23.611653050" lastFinishedPulling="2025-11-08 00:21:06.843678056 +0000 UTC m=+43.792072520" observedRunningTime="2025-11-08 00:21:07.33936413 +0000 UTC m=+44.287758574" watchObservedRunningTime="2025-11-08 00:21:07.339557905 +0000 UTC m=+44.287952369" Nov 8 00:21:07.417345 containerd[1462]: time="2025-11-08T00:21:07.416229821Z" level=info msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\"" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.506 [INFO][3880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.506 [INFO][3880] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" iface="eth0" netns="/var/run/netns/cni-ddf48bf1-b161-b560-d780-450905c970b1" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.507 [INFO][3880] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" iface="eth0" netns="/var/run/netns/cni-ddf48bf1-b161-b560-d780-450905c970b1" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.507 [INFO][3880] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" iface="eth0" netns="/var/run/netns/cni-ddf48bf1-b161-b560-d780-450905c970b1" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.507 [INFO][3880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.507 [INFO][3880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.580 [INFO][3897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.580 [INFO][3897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.581 [INFO][3897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.587 [WARNING][3897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.587 [INFO][3897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0" Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.590 [INFO][3897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:07.597520 containerd[1462]: 2025-11-08 00:21:07.594 [INFO][3880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Nov 8 00:21:07.600053 containerd[1462]: time="2025-11-08T00:21:07.600013287Z" level=info msg="TearDown network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" successfully" Nov 8 00:21:07.600053 containerd[1462]: time="2025-11-08T00:21:07.600054686Z" level=info msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" returns successfully" Nov 8 00:21:07.601724 systemd[1]: run-netns-cni\x2dddf48bf1\x2db161\x2db560\x2dd780\x2d450905c970b1.mount: Deactivated successfully. 
Nov 8 00:21:07.821380 kubelet[2509]: I1108 00:21:07.821317 2509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65117d06-024f-4bb6-a156-fce351c46adb-whisker-backend-key-pair\") pod \"65117d06-024f-4bb6-a156-fce351c46adb\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " Nov 8 00:21:07.821380 kubelet[2509]: I1108 00:21:07.821387 2509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65117d06-024f-4bb6-a156-fce351c46adb-whisker-ca-bundle\") pod \"65117d06-024f-4bb6-a156-fce351c46adb\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " Nov 8 00:21:07.821595 kubelet[2509]: I1108 00:21:07.821416 2509 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g66tw\" (UniqueName: \"kubernetes.io/projected/65117d06-024f-4bb6-a156-fce351c46adb-kube-api-access-g66tw\") pod \"65117d06-024f-4bb6-a156-fce351c46adb\" (UID: \"65117d06-024f-4bb6-a156-fce351c46adb\") " Nov 8 00:21:07.822368 kubelet[2509]: I1108 00:21:07.822285 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65117d06-024f-4bb6-a156-fce351c46adb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "65117d06-024f-4bb6-a156-fce351c46adb" (UID: "65117d06-024f-4bb6-a156-fce351c46adb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:21:07.825847 kubelet[2509]: I1108 00:21:07.825794 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65117d06-024f-4bb6-a156-fce351c46adb-kube-api-access-g66tw" (OuterVolumeSpecName: "kube-api-access-g66tw") pod "65117d06-024f-4bb6-a156-fce351c46adb" (UID: "65117d06-024f-4bb6-a156-fce351c46adb"). InnerVolumeSpecName "kube-api-access-g66tw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:21:07.826557 kubelet[2509]: I1108 00:21:07.826508 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65117d06-024f-4bb6-a156-fce351c46adb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "65117d06-024f-4bb6-a156-fce351c46adb" (UID: "65117d06-024f-4bb6-a156-fce351c46adb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:21:07.828234 systemd[1]: var-lib-kubelet-pods-65117d06\x2d024f\x2d4bb6\x2da156\x2dfce351c46adb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg66tw.mount: Deactivated successfully. Nov 8 00:21:07.828366 systemd[1]: var-lib-kubelet-pods-65117d06\x2d024f\x2d4bb6\x2da156\x2dfce351c46adb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
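[Editor's note] The .mount unit names systemd logs here look garbled but are deterministic: systemd derives a unit name from a path by replacing "/" with "-" and rendering any other byte outside [A-Za-z0-9:_.] as \xXX, so a literal "-" becomes \x2d and the "~" in kubernetes.io~projected becomes \x7e. A rough Go reimplementation, an approximation of systemd-escape --path rather than systemd's exact code:

package main

import (
	"fmt"
	"strings"
)

// escapeSystemdPath approximates systemd's path-to-unit-name escaping:
// trim leading/trailing slashes, map "/" to "-", and render every other
// byte outside [A-Za-z0-9:_.] as \xXX (a leading "." is escaped too).
// Approximation for illustration; systemd has more corner cases.
func escapeSystemdPath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the kube-api-access mount unit name seen above
	// (plus the ".mount" suffix systemd appends).
	fmt.Println(escapeSystemdPath(
		"/var/lib/kubelet/pods/65117d06-024f-4bb6-a156-fce351c46adb/volumes/kubernetes.io~projected/kube-api-access-g66tw"))
}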
Nov 8 00:21:07.922076 kubelet[2509]: I1108 00:21:07.922025 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65117d06-024f-4bb6-a156-fce351c46adb-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:07.922076 kubelet[2509]: I1108 00:21:07.922057 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g66tw\" (UniqueName: \"kubernetes.io/projected/65117d06-024f-4bb6-a156-fce351c46adb-kube-api-access-g66tw\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:07.922076 kubelet[2509]: I1108 00:21:07.922066 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65117d06-024f-4bb6-a156-fce351c46adb-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:21:08.171912 containerd[1462]: time="2025-11-08T00:21:08.171793589Z" level=info msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\"" Nov 8 00:21:08.172464 containerd[1462]: time="2025-11-08T00:21:08.172124343Z" level=info msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\"" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.225 [INFO][3940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.226 [INFO][3940] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" iface="eth0" netns="/var/run/netns/cni-61b4dbb2-b5d5-fa29-6d87-f2168c219e1d" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.229 [INFO][3940] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" iface="eth0" netns="/var/run/netns/cni-61b4dbb2-b5d5-fa29-6d87-f2168c219e1d" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.229 [INFO][3940] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" iface="eth0" netns="/var/run/netns/cni-61b4dbb2-b5d5-fa29-6d87-f2168c219e1d" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.230 [INFO][3940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.230 [INFO][3940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.257 [INFO][3960] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.258 [INFO][3960] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.258 [INFO][3960] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.264 [WARNING][3960] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.264 [INFO][3960] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.266 [INFO][3960] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:08.273594 containerd[1462]: 2025-11-08 00:21:08.270 [INFO][3940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Nov 8 00:21:08.277170 containerd[1462]: time="2025-11-08T00:21:08.277002479Z" level=info msg="TearDown network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" successfully" Nov 8 00:21:08.277170 containerd[1462]: time="2025-11-08T00:21:08.277043586Z" level=info msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" returns successfully" Nov 8 00:21:08.282084 containerd[1462]: time="2025-11-08T00:21:08.282035634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-xfnk2,Uid:779024e8-f065-402a-9618-c2d1616b455b,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.224 [INFO][3941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.226 [INFO][3941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" iface="eth0" netns="/var/run/netns/cni-d1659c7e-b54e-8817-8c1e-dadbd16fa3b3" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.226 [INFO][3941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" iface="eth0" netns="/var/run/netns/cni-d1659c7e-b54e-8817-8c1e-dadbd16fa3b3" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.227 [INFO][3941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" iface="eth0" netns="/var/run/netns/cni-d1659c7e-b54e-8817-8c1e-dadbd16fa3b3" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.227 [INFO][3941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.227 [INFO][3941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.263 [INFO][3955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.263 [INFO][3955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.266 [INFO][3955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.272 [WARNING][3955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.273 [INFO][3955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.274 [INFO][3955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:08.283001 containerd[1462]: 2025-11-08 00:21:08.278 [INFO][3941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:08.284928 containerd[1462]: time="2025-11-08T00:21:08.283211944Z" level=info msg="TearDown network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" successfully" Nov 8 00:21:08.284928 containerd[1462]: time="2025-11-08T00:21:08.283231180Z" level=info msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" returns successfully" Nov 8 00:21:08.283313 systemd[1]: run-netns-cni\x2d61b4dbb2\x2db5d5\x2dfa29\x2d6d87\x2df2168c219e1d.mount: Deactivated successfully. Nov 8 00:21:08.287382 systemd[1]: run-netns-cni\x2dd1659c7e\x2db54e\x2d8817\x2d8c1e\x2ddadbd16fa3b3.mount: Deactivated successfully. 
Nov 8 00:21:08.287979 containerd[1462]: time="2025-11-08T00:21:08.287516184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564bf5b6db-26fpn,Uid:e3df79a4-2d69-4d1b-a3d8-5080134a94f0,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:08.323917 kubelet[2509]: E1108 00:21:08.323786 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:08.341554 systemd[1]: Removed slice kubepods-besteffort-pod65117d06_024f_4bb6_a156_fce351c46adb.slice - libcontainer container kubepods-besteffort-pod65117d06_024f_4bb6_a156_fce351c46adb.slice. Nov 8 00:21:08.404963 systemd[1]: Created slice kubepods-besteffort-pod6cb2e068_b098_433b_ba03_a3d8a7a50da8.slice - libcontainer container kubepods-besteffort-pod6cb2e068_b098_433b_ba03_a3d8a7a50da8.slice. Nov 8 00:21:08.427195 kubelet[2509]: I1108 00:21:08.426248 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cb2e068-b098-433b-ba03-a3d8a7a50da8-whisker-ca-bundle\") pod \"whisker-56944ff74d-jfjjh\" (UID: \"6cb2e068-b098-433b-ba03-a3d8a7a50da8\") " pod="calico-system/whisker-56944ff74d-jfjjh" Nov 8 00:21:08.427195 kubelet[2509]: I1108 00:21:08.426291 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cb2e068-b098-433b-ba03-a3d8a7a50da8-whisker-backend-key-pair\") pod \"whisker-56944ff74d-jfjjh\" (UID: \"6cb2e068-b098-433b-ba03-a3d8a7a50da8\") " pod="calico-system/whisker-56944ff74d-jfjjh" Nov 8 00:21:08.427195 kubelet[2509]: I1108 00:21:08.426305 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfsr\" (UniqueName: \"kubernetes.io/projected/6cb2e068-b098-433b-ba03-a3d8a7a50da8-kube-api-access-wrfsr\") pod \"whisker-56944ff74d-jfjjh\" (UID: \"6cb2e068-b098-433b-ba03-a3d8a7a50da8\") " pod="calico-system/whisker-56944ff74d-jfjjh" Nov 8 00:21:08.501840 systemd-networkd[1407]: cali80dbf584060: Link UP Nov 8 00:21:08.503883 systemd-networkd[1407]: cali80dbf584060: Gained carrier Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.368 [INFO][3976] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.393 [INFO][3976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0 calico-apiserver-6477c478b5- calico-apiserver 779024e8-f065-402a-9618-c2d1616b455b 968 0 2025-11-08 00:20:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6477c478b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6477c478b5-xfnk2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali80dbf584060 [] [] }} ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.394 [INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4025] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" HandleID="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4025] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" HandleID="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6477c478b5-xfnk2", "timestamp":"2025-11-08 00:21:08.451135803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.459 [INFO][4025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.469 [INFO][4025] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.473 [INFO][4025] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.475 [INFO][4025] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.478 [INFO][4025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.478 [INFO][4025] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.479 [INFO][4025] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.484 [INFO][4025] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 
00:21:08.488 [INFO][4025] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.488 [INFO][4025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" host="localhost" Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.488 [INFO][4025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:08.520499 containerd[1462]: 2025-11-08 00:21:08.488 [INFO][4025] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" HandleID="k8s-pod-network.504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.491 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"779024e8-f065-402a-9618-c2d1616b455b", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6477c478b5-xfnk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80dbf584060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.492 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.492 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80dbf584060 ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.502 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.502 [INFO][3976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"779024e8-f065-402a-9618-c2d1616b455b", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac", Pod:"calico-apiserver-6477c478b5-xfnk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80dbf584060", MAC:"16:65:6f:81:10:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.521369 containerd[1462]: 2025-11-08 00:21:08.517 [INFO][3976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-xfnk2" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0" Nov 8 00:21:08.553691 containerd[1462]: time="2025-11-08T00:21:08.553570578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:08.554073 containerd[1462]: time="2025-11-08T00:21:08.553850226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:08.554073 containerd[1462]: time="2025-11-08T00:21:08.553898406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.554073 containerd[1462]: time="2025-11-08T00:21:08.554019515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.583031 systemd[1]: Started cri-containerd-504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac.scope - libcontainer container 504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac. Nov 8 00:21:08.599585 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:08.601077 systemd-networkd[1407]: calie4d37c79f80: Link UP Nov 8 00:21:08.601942 systemd-networkd[1407]: calie4d37c79f80: Gained carrier Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.381 [INFO][3989] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.406 [INFO][3989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0 calico-kube-controllers-564bf5b6db- calico-system e3df79a4-2d69-4d1b-a3d8-5080134a94f0 967 0 2025-11-08 00:20:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:564bf5b6db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-564bf5b6db-26fpn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie4d37c79f80 [] [] }} ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.406 [INFO][3989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4031] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" HandleID="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4031] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" HandleID="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-564bf5b6db-26fpn", "timestamp":"2025-11-08 00:21:08.451492165 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.451 [INFO][4031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.488 [INFO][4031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.488 [INFO][4031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.560 [INFO][4031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.571 [INFO][4031] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.576 [INFO][4031] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.578 [INFO][4031] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.580 [INFO][4031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.580 [INFO][4031] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.581 [INFO][4031] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646 Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.585 [INFO][4031] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.591 [INFO][4031] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.591 [INFO][4031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" host="localhost" Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.592 [INFO][4031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
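[Editor's note] The two IPAM walks make the allocation order visible: this node holds an affinity for block 192.168.88.128/26, the apiserver pod claimed 192.168.88.129, and the kube-controllers pod now claims 192.168.88.130, the next free address in the block. A simplified sketch of that first-free scan; real Calico tracks allocations in datastore-backed block bitmaps and reserves addresses this toy map does not:

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeIP walks a CIDR block in order and returns the first address
// not yet allocated, the visible effect of the ipam.go lines above.
func nextFreeIP(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		// .128 is never handed out in the log; reserve it here so the
		// sketch reproduces the observed order (.129, then .130).
		netip.MustParseAddr("192.168.88.128"): true,
	}
	for i := 0; i < 2; i++ {
		ip, ok := nextFreeIP(block, allocated)
		if !ok {
			fmt.Println("block exhausted")
			return
		}
		allocated[ip] = true
		fmt.Println("claimed", ip) // 192.168.88.129, then 192.168.88.130
	}
}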
Nov 8 00:21:08.620353 containerd[1462]: 2025-11-08 00:21:08.592 [INFO][4031] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" HandleID="k8s-pod-network.38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.596 [INFO][3989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0", GenerateName:"calico-kube-controllers-564bf5b6db-", Namespace:"calico-system", SelfLink:"", UID:"e3df79a4-2d69-4d1b-a3d8-5080134a94f0", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564bf5b6db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-564bf5b6db-26fpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4d37c79f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.596 [INFO][3989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.596 [INFO][3989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4d37c79f80 ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.604 [INFO][3989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.604 [INFO][3989] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0", GenerateName:"calico-kube-controllers-564bf5b6db-", Namespace:"calico-system", SelfLink:"", UID:"e3df79a4-2d69-4d1b-a3d8-5080134a94f0", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564bf5b6db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646", Pod:"calico-kube-controllers-564bf5b6db-26fpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4d37c79f80", MAC:"9a:78:e9:51:1f:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.620977 containerd[1462]: 2025-11-08 00:21:08.616 [INFO][3989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646" Namespace="calico-system" Pod="calico-kube-controllers-564bf5b6db-26fpn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:08.635042 containerd[1462]: time="2025-11-08T00:21:08.634996007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-xfnk2,Uid:779024e8-f065-402a-9618-c2d1616b455b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac\"" Nov 8 00:21:08.637613 containerd[1462]: time="2025-11-08T00:21:08.637576305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:08.641759 containerd[1462]: time="2025-11-08T00:21:08.641017187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:08.641759 containerd[1462]: time="2025-11-08T00:21:08.641710935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:08.641759 containerd[1462]: time="2025-11-08T00:21:08.641725082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.641962 containerd[1462]: time="2025-11-08T00:21:08.641813047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.670017 systemd[1]: Started cri-containerd-38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646.scope - libcontainer container 38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646. Nov 8 00:21:08.683834 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:08.715557 containerd[1462]: time="2025-11-08T00:21:08.714840974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56944ff74d-jfjjh,Uid:6cb2e068-b098-433b-ba03-a3d8a7a50da8,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:08.718223 containerd[1462]: time="2025-11-08T00:21:08.718159024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564bf5b6db-26fpn,Uid:e3df79a4-2d69-4d1b-a3d8-5080134a94f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646\"" Nov 8 00:21:08.824726 systemd-networkd[1407]: caliae06aaf710b: Link UP Nov 8 00:21:08.826641 systemd-networkd[1407]: caliae06aaf710b: Gained carrier Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.750 [INFO][4141] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.761 [INFO][4141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--56944ff74d--jfjjh-eth0 whisker-56944ff74d- calico-system 6cb2e068-b098-433b-ba03-a3d8a7a50da8 982 0 2025-11-08 00:21:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56944ff74d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-56944ff74d-jfjjh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliae06aaf710b [] [] }} ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.761 [INFO][4141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.789 [INFO][4155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" HandleID="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Workload="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.789 [INFO][4155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" HandleID="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Workload="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001356d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-56944ff74d-jfjjh", "timestamp":"2025-11-08 00:21:08.789712729 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.790 [INFO][4155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.790 [INFO][4155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.790 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.796 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.800 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.803 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.805 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.806 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.806 [INFO][4155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.807 [INFO][4155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4 Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.810 [INFO][4155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.817 [INFO][4155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.817 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" host="localhost" Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.817 [INFO][4155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
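
The same transaction repeats for the whisker pod and yields the next ordinal, .131, because the host-wide lock serializes concurrent plugin invocations. Each claim is recorded under its handle ID, which is what teardown relies on later: the StopPodSandbox sequences further down release "using handleID", and a release for a handle with no recorded address is deliberately ignored ("Asked to release address but it doesn't exist. Ignoring"). A sketch of that idempotent release, again with invented types rather than Calico's real datastore model:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // Allocations are tracked per handle so teardown can say "release
    // whatever this sandbox owned" without knowing the address. Releasing
    // an unknown handle is a no-op, matching the warning in the teardown
    // sequences below.
    type allocator struct {
    	mu       sync.Mutex
    	byHandle map[string]string // handle ID -> assigned address
    }

    func (a *allocator) releaseByHandle(handle string) bool {
    	a.mu.Lock()
    	defer a.mu.Unlock()
    	if _, ok := a.byHandle[handle]; !ok {
    		return false // nothing recorded for this handle; ignore
    	}
    	delete(a.byHandle, handle)
    	return true
    }

    func main() {
    	a := &allocator{byHandle: map[string]string{
    		"k8s-pod-network.8912a3f7": "192.168.88.131", // truncated sandbox ID
    	}}
    	fmt.Println(a.releaseByHandle("k8s-pod-network.8912a3f7")) // true: freed
    	fmt.Println(a.releaseByHandle("k8s-pod-network.unknown"))  // false: ignored
    }
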
Nov 8 00:21:08.845690 containerd[1462]: 2025-11-08 00:21:08.817 [INFO][4155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" HandleID="k8s-pod-network.8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Workload="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.822 [INFO][4141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56944ff74d--jfjjh-eth0", GenerateName:"whisker-56944ff74d-", Namespace:"calico-system", SelfLink:"", UID:"6cb2e068-b098-433b-ba03-a3d8a7a50da8", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56944ff74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-56944ff74d-jfjjh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae06aaf710b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.822 [INFO][4141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.822 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae06aaf710b ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.824 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.825 [INFO][4141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56944ff74d--jfjjh-eth0", GenerateName:"whisker-56944ff74d-", Namespace:"calico-system", SelfLink:"", UID:"6cb2e068-b098-433b-ba03-a3d8a7a50da8", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56944ff74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4", Pod:"whisker-56944ff74d-jfjjh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae06aaf710b", MAC:"f2:a9:99:7a:cc:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:08.846575 containerd[1462]: 2025-11-08 00:21:08.838 [INFO][4141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4" Namespace="calico-system" Pod="whisker-56944ff74d-jfjjh" WorkloadEndpoint="localhost-k8s-whisker--56944ff74d--jfjjh-eth0" Nov 8 00:21:08.879282 containerd[1462]: time="2025-11-08T00:21:08.878894591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:08.879282 containerd[1462]: time="2025-11-08T00:21:08.878969282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:08.879282 containerd[1462]: time="2025-11-08T00:21:08.879102082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.879663 containerd[1462]: time="2025-11-08T00:21:08.879436483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:08.910232 systemd[1]: Started cri-containerd-8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4.scope - libcontainer container 8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4. 
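
At this point the plugin has handed the runtime a CNI ADD result carrying the assigned address, the host-side veth name (caliae06aaf710b), and the pod interface MAC, and containerd has started the sandbox as a transient systemd scope. Roughly what such a result looks like on the wire, following the CNI 1.0 result schema; the netns path below is a placeholder, not taken from the log:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    // Minimal shapes for a CNI ADD result (after the CNI 1.0 result
    // schema). The plugin returns JSON like this to the runtime, which is
    // how containerd learns the addresses and interface names logged above.
    type cniInterface struct {
    	Name    string `json:"name"`
    	Mac     string `json:"mac,omitempty"`
    	Sandbox string `json:"sandbox,omitempty"`
    }

    type cniIP struct {
    	Address   string `json:"address"`
    	Interface *int   `json:"interface,omitempty"`
    }

    type cniResult struct {
    	CNIVersion string         `json:"cniVersion"`
    	Interfaces []cniInterface `json:"interfaces"`
    	IPs        []cniIP        `json:"ips"`
    }

    func main() {
    	podSide := 1 // the IP is bound to the pod-side interface
    	res := cniResult{
    		CNIVersion: "1.0.0",
    		Interfaces: []cniInterface{
    			{Name: "caliae06aaf710b"}, // host-side veth from the log
    			{Name: "eth0", Mac: "f2:a9:99:7a:cc:bd", Sandbox: "/var/run/netns/<pod-netns>"},
    		},
    		IPs: []cniIP{{Address: "192.168.88.131/32", Interface: &podSide}},
    	}
    	json.NewEncoder(os.Stdout).Encode(res)
    }
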
Nov 8 00:21:08.938569 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:08.983452 containerd[1462]: time="2025-11-08T00:21:08.983375647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56944ff74d-jfjjh,Uid:6cb2e068-b098-433b-ba03-a3d8a7a50da8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8912a3f7f8beaef912b1dac4c7b861e3836056d182a5ceff5cbf546a5389d4a4\"" Nov 8 00:21:08.997079 containerd[1462]: time="2025-11-08T00:21:08.996989812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:09.004444 containerd[1462]: time="2025-11-08T00:21:09.004355297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:09.022959 containerd[1462]: time="2025-11-08T00:21:09.012511764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:09.025277 kubelet[2509]: E1108 00:21:09.023302 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:09.025277 kubelet[2509]: E1108 00:21:09.023364 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:09.025277 kubelet[2509]: E1108 00:21:09.023548 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-xfnk2_calico-apiserver(779024e8-f065-402a-9618-c2d1616b455b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:09.025277 kubelet[2509]: E1108 00:21:09.023593 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b" Nov 8 00:21:09.025562 containerd[1462]: time="2025-11-08T00:21:09.024227305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:09.127048 kernel: bpftool[4318]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:21:09.174642 kubelet[2509]: I1108 00:21:09.174600 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65117d06-024f-4bb6-a156-fce351c46adb" 
path="/var/lib/kubelet/pods/65117d06-024f-4bb6-a156-fce351c46adb/volumes" Nov 8 00:21:09.330376 kubelet[2509]: E1108 00:21:09.330239 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b" Nov 8 00:21:09.384126 containerd[1462]: time="2025-11-08T00:21:09.384082437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:09.386618 containerd[1462]: time="2025-11-08T00:21:09.386559340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:09.386716 containerd[1462]: time="2025-11-08T00:21:09.386604615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:09.386997 kubelet[2509]: E1108 00:21:09.386921 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:09.387047 kubelet[2509]: E1108 00:21:09.387008 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:09.387188 kubelet[2509]: E1108 00:21:09.387144 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-564bf5b6db-26fpn_calico-system(e3df79a4-2d69-4d1b-a3d8-5080134a94f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:09.387316 kubelet[2509]: E1108 00:21:09.387253 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0" Nov 8 00:21:09.388809 containerd[1462]: 
time="2025-11-08T00:21:09.388333326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:09.391413 systemd-networkd[1407]: vxlan.calico: Link UP Nov 8 00:21:09.391432 systemd-networkd[1407]: vxlan.calico: Gained carrier Nov 8 00:21:09.626360 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:43732.service - OpenSSH per-connection server daemon (10.0.0.1:43732). Nov 8 00:21:09.668914 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 43732 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:21:09.671142 sshd[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:09.678331 systemd-logind[1454]: New session 9 of user core. Nov 8 00:21:09.687046 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:21:09.744598 containerd[1462]: time="2025-11-08T00:21:09.744550066Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:09.745565 containerd[1462]: time="2025-11-08T00:21:09.745535775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:09.745627 containerd[1462]: time="2025-11-08T00:21:09.745583785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:09.745808 kubelet[2509]: E1108 00:21:09.745764 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:09.745858 kubelet[2509]: E1108 00:21:09.745816 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:09.745945 kubelet[2509]: E1108 00:21:09.745925 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:09.746680 containerd[1462]: time="2025-11-08T00:21:09.746614961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:21:09.817203 sshd[4382]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:09.821659 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:43732.service: Deactivated successfully. Nov 8 00:21:09.823676 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:21:09.824407 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:21:09.825316 systemd-logind[1454]: Removed session 9. 
Nov 8 00:21:10.018023 systemd-networkd[1407]: cali80dbf584060: Gained IPv6LL Nov 8 00:21:10.113280 containerd[1462]: time="2025-11-08T00:21:10.113235656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:10.118498 containerd[1462]: time="2025-11-08T00:21:10.118453539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:21:10.118635 containerd[1462]: time="2025-11-08T00:21:10.118560301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:10.118733 kubelet[2509]: E1108 00:21:10.118695 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:10.118838 kubelet[2509]: E1108 00:21:10.118741 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:10.118838 kubelet[2509]: E1108 00:21:10.118819 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:10.118947 kubelet[2509]: E1108 00:21:10.118880 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56944ff74d-jfjjh" podUID="6cb2e068-b098-433b-ba03-a3d8a7a50da8" Nov 8 00:21:10.171210 containerd[1462]: time="2025-11-08T00:21:10.171167170Z" level=info msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.220 [INFO][4442] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 
00:21:10.220 [INFO][4442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" iface="eth0" netns="/var/run/netns/cni-b0cf4d75-4e41-b84e-138a-5d41d52f2c84" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.220 [INFO][4442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" iface="eth0" netns="/var/run/netns/cni-b0cf4d75-4e41-b84e-138a-5d41d52f2c84" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.221 [INFO][4442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" iface="eth0" netns="/var/run/netns/cni-b0cf4d75-4e41-b84e-138a-5d41d52f2c84" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.221 [INFO][4442] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.221 [INFO][4442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.243 [INFO][4452] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.243 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.243 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.251 [WARNING][4452] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.252 [INFO][4452] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.253 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:10.259712 containerd[1462]: 2025-11-08 00:21:10.256 [INFO][4442] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:10.260140 containerd[1462]: time="2025-11-08T00:21:10.259981167Z" level=info msg="TearDown network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" successfully" Nov 8 00:21:10.260140 containerd[1462]: time="2025-11-08T00:21:10.260015502Z" level=info msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" returns successfully" Nov 8 00:21:10.263562 systemd[1]: run-netns-cni\x2db0cf4d75\x2d4e41\x2db84e\x2d138a\x2d5d41d52f2c84.mount: Deactivated successfully. Nov 8 00:21:10.264826 kubelet[2509]: E1108 00:21:10.264790 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:10.265274 containerd[1462]: time="2025-11-08T00:21:10.265241379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8f6fz,Uid:9e0d5390-5a60-44e9-a40d-847919eb2c6d,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:10.335759 kubelet[2509]: E1108 00:21:10.335140 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b" Nov 8 00:21:10.335759 kubelet[2509]: E1108 00:21:10.335206 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0" Nov 8 00:21:10.336411 kubelet[2509]: E1108 00:21:10.336315 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56944ff74d-jfjjh" podUID="6cb2e068-b098-433b-ba03-a3d8a7a50da8" Nov 8 00:21:10.389892 systemd-networkd[1407]: calief257d0dc87: Link UP Nov 8 00:21:10.390519 
systemd-networkd[1407]: calief257d0dc87: Gained carrier Nov 8 00:21:10.402974 systemd-networkd[1407]: caliae06aaf710b: Gained IPv6LL Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.314 [INFO][4461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--8f6fz-eth0 coredns-66bc5c9577- kube-system 9e0d5390-5a60-44e9-a40d-847919eb2c6d 1027 0 2025-11-08 00:20:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-8f6fz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief257d0dc87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.314 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.345 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" HandleID="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.345 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" HandleID="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-8f6fz", "timestamp":"2025-11-08 00:21:10.345149917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.345 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.345 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
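
The recurring kubelet warning "Nameserver limits exceeded" means the node's resolv.conf lists more nameservers than the limit of 3 (glibc's MAXNS), so kubelet keeps only the first three, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that pruning; the fourth nameserver in the demo input is hypothetical, since the dropped entry is not shown in the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // pruneNameservers keeps at most `limit` nameserver entries from a
    // resolv.conf, the way kubelet drops extras and logs the warning above.
    func pruneNameservers(resolvConf string, limit int) []string {
    	var servers []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > limit {
    		servers = servers[:limit]
    	}
    	return servers
    }

    func main() {
    	host := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    	fmt.Println(pruneNameservers(host, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8], the applied line in the log
    }
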
Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.345 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.352 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.365 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.369 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.370 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.372 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.372 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.374 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453 Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.377 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.383 [INFO][4474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.383 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" host="localhost" Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.383 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
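
In the endpoint dumps that follow, the coredns WorkloadEndpointPort values are printed as Go hex literals; decoded, they are the usual coredns container ports, as this snippet shows:

    package main

    import "fmt"

    // The struct dumps below print ports as hex literals; decoding them
    // recovers the familiar coredns port numbers.
    func main() {
    	ports := []struct {
    		name string
    		port uint16
    	}{
    		{"dns / dns-tcp", 0x35},    // 53
    		{"metrics", 0x23c1},        // 9153
    		{"liveness-probe", 0x1f90}, // 8080
    		{"readiness-probe", 0x1ff5},// 8181
    	}
    	for _, p := range ports {
    		fmt.Printf("%-16s %d\n", p.name, p.port)
    	}
    }
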
Nov 8 00:21:10.411348 containerd[1462]: 2025-11-08 00:21:10.383 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" HandleID="k8s-pod-network.9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.387 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8f6fz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9e0d5390-5a60-44e9-a40d-847919eb2c6d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-8f6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief257d0dc87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.387 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.387 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief257d0dc87 ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.390 
[INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.391 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8f6fz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9e0d5390-5a60-44e9-a40d-847919eb2c6d", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453", Pod:"coredns-66bc5c9577-8f6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief257d0dc87", MAC:"3e:f6:20:46:90:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:10.412259 containerd[1462]: 2025-11-08 00:21:10.407 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453" Namespace="kube-system" Pod="coredns-66bc5c9577-8f6fz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:10.434722 containerd[1462]: time="2025-11-08T00:21:10.434576789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:10.434722 containerd[1462]: time="2025-11-08T00:21:10.434682458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:10.434942 containerd[1462]: time="2025-11-08T00:21:10.434707626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:10.434942 containerd[1462]: time="2025-11-08T00:21:10.434901101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:10.463016 systemd[1]: Started cri-containerd-9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453.scope - libcontainer container 9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453. Nov 8 00:21:10.476978 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:10.503731 containerd[1462]: time="2025-11-08T00:21:10.503678844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8f6fz,Uid:9e0d5390-5a60-44e9-a40d-847919eb2c6d,Namespace:kube-system,Attempt:1,} returns sandbox id \"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453\"" Nov 8 00:21:10.504478 kubelet[2509]: E1108 00:21:10.504444 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:10.510504 containerd[1462]: time="2025-11-08T00:21:10.510452982Z" level=info msg="CreateContainer within sandbox \"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:10.529995 systemd-networkd[1407]: calie4d37c79f80: Gained IPv6LL Nov 8 00:21:10.552169 containerd[1462]: time="2025-11-08T00:21:10.552126178Z" level=info msg="CreateContainer within sandbox \"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccc1645ca3051ae4e1654074496f83c12a8d8582292cc59ea0f75a10c8cf9901\"" Nov 8 00:21:10.553549 containerd[1462]: time="2025-11-08T00:21:10.552685964Z" level=info msg="StartContainer for \"ccc1645ca3051ae4e1654074496f83c12a8d8582292cc59ea0f75a10c8cf9901\"" Nov 8 00:21:10.584018 systemd[1]: Started cri-containerd-ccc1645ca3051ae4e1654074496f83c12a8d8582292cc59ea0f75a10c8cf9901.scope - libcontainer container ccc1645ca3051ae4e1654074496f83c12a8d8582292cc59ea0f75a10c8cf9901. Nov 8 00:21:10.640784 containerd[1462]: time="2025-11-08T00:21:10.640597447Z" level=info msg="StartContainer for \"ccc1645ca3051ae4e1654074496f83c12a8d8582292cc59ea0f75a10c8cf9901\" returns successfully" Nov 8 00:21:11.106068 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Nov 8 00:21:11.171554 containerd[1462]: time="2025-11-08T00:21:11.171502355Z" level=info msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.222 [INFO][4586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.222 [INFO][4586] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" iface="eth0" netns="/var/run/netns/cni-0b4605e7-8631-6d46-5a0c-cc2683753574" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.222 [INFO][4586] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" iface="eth0" netns="/var/run/netns/cni-0b4605e7-8631-6d46-5a0c-cc2683753574" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.225 [INFO][4586] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" iface="eth0" netns="/var/run/netns/cni-0b4605e7-8631-6d46-5a0c-cc2683753574" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.225 [INFO][4586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.225 [INFO][4586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.248 [INFO][4595] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.248 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.248 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.254 [WARNING][4595] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.254 [INFO][4595] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.256 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:11.262514 containerd[1462]: 2025-11-08 00:21:11.259 [INFO][4586] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:11.263131 containerd[1462]: time="2025-11-08T00:21:11.262649947Z" level=info msg="TearDown network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" successfully" Nov 8 00:21:11.263131 containerd[1462]: time="2025-11-08T00:21:11.262679282Z" level=info msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" returns successfully" Nov 8 00:21:11.266017 kubelet[2509]: E1108 00:21:11.265980 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.266480 containerd[1462]: time="2025-11-08T00:21:11.266413095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6z69l,Uid:6095f60b-9a5f-4061-ba74-c474c415b963,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:11.282727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063995345.mount: Deactivated successfully. Nov 8 00:21:11.283246 systemd[1]: run-netns-cni\x2d0b4605e7\x2d8631\x2d6d46\x2d5a0c\x2dcc2683753574.mount: Deactivated successfully. Nov 8 00:21:11.349938 kubelet[2509]: E1108 00:21:11.349898 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.364774 kubelet[2509]: I1108 00:21:11.364595 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8f6fz" podStartSLOduration=41.364569487 podStartE2EDuration="41.364569487s" podCreationTimestamp="2025-11-08 00:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:11.364197446 +0000 UTC m=+48.312591900" watchObservedRunningTime="2025-11-08 00:21:11.364569487 +0000 UTC m=+48.312963952" Nov 8 00:21:11.424537 systemd-networkd[1407]: cali9bbc8317a2e: Link UP Nov 8 00:21:11.425539 systemd-networkd[1407]: cali9bbc8317a2e: Gained carrier Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.315 [INFO][4604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--6z69l-eth0 coredns-66bc5c9577- kube-system 6095f60b-9a5f-4061-ba74-c474c415b963 1058 0 2025-11-08 00:20:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-6z69l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9bbc8317a2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.315 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.353 [INFO][4618] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" HandleID="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.357 [INFO][4618] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" HandleID="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-6z69l", "timestamp":"2025-11-08 00:21:11.353899803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.358 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.358 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.358 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.370 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.378 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.392 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.396 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.400 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.400 [INFO][4618] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.402 [INFO][4618] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.407 [INFO][4618] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.416 [INFO][4618] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.416 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" host="localhost" Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.416 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:11.450912 containerd[1462]: 2025-11-08 00:21:11.416 [INFO][4618] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" HandleID="k8s-pod-network.f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.421 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6z69l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6095f60b-9a5f-4061-ba74-c474c415b963", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-6z69l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bbc8317a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.421 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.421 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbc8317a2e 
ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.425 [INFO][4604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.426 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6z69l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6095f60b-9a5f-4061-ba74-c474c415b963", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d", Pod:"coredns-66bc5c9577-6z69l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bbc8317a2e", MAC:"6a:83:ca:3f:57:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:11.452567 containerd[1462]: 2025-11-08 00:21:11.442 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d" Namespace="kube-system" Pod="coredns-66bc5c9577-6z69l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:11.489322 containerd[1462]: time="2025-11-08T00:21:11.489155929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:11.489322 containerd[1462]: time="2025-11-08T00:21:11.489265576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:11.489322 containerd[1462]: time="2025-11-08T00:21:11.489309047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:11.489839 containerd[1462]: time="2025-11-08T00:21:11.489451947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:11.517138 systemd[1]: run-containerd-runc-k8s.io-f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d-runc.lRUt58.mount: Deactivated successfully. Nov 8 00:21:11.530088 systemd[1]: Started cri-containerd-f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d.scope - libcontainer container f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d. Nov 8 00:21:11.548924 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:21:11.579594 containerd[1462]: time="2025-11-08T00:21:11.579513883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6z69l,Uid:6095f60b-9a5f-4061-ba74-c474c415b963,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d\"" Nov 8 00:21:11.580794 kubelet[2509]: E1108 00:21:11.580744 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:21:11.587780 containerd[1462]: time="2025-11-08T00:21:11.587723005Z" level=info msg="CreateContainer within sandbox \"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:11.604891 containerd[1462]: time="2025-11-08T00:21:11.604836373Z" level=info msg="CreateContainer within sandbox \"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f759375b3e11e6dbc9417185e4b785b2af3104c197f1916d4380ce0bef6765c7\"" Nov 8 00:21:11.605777 containerd[1462]: time="2025-11-08T00:21:11.605740428Z" level=info msg="StartContainer for \"f759375b3e11e6dbc9417185e4b785b2af3104c197f1916d4380ce0bef6765c7\"" Nov 8 00:21:11.640038 systemd[1]: Started cri-containerd-f759375b3e11e6dbc9417185e4b785b2af3104c197f1916d4380ce0bef6765c7.scope - libcontainer container f759375b3e11e6dbc9417185e4b785b2af3104c197f1916d4380ce0bef6765c7. 
Nov 8 00:21:11.681947 containerd[1462]: time="2025-11-08T00:21:11.681466094Z" level=info msg="StartContainer for \"f759375b3e11e6dbc9417185e4b785b2af3104c197f1916d4380ce0bef6765c7\" returns successfully"
Nov 8 00:21:12.176096 containerd[1462]: time="2025-11-08T00:21:12.175364326Z" level=info msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\""
Nov 8 00:21:12.176096 containerd[1462]: time="2025-11-08T00:21:12.175349288Z" level=info msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\""
Nov 8 00:21:12.176096 containerd[1462]: time="2025-11-08T00:21:12.175812862Z" level=info msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\""
Nov 8 00:21:12.265397 systemd-networkd[1407]: calief257d0dc87: Gained IPv6LL
Nov 8 00:21:12.374995 kubelet[2509]: E1108 00:21:12.374699 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:12.378176 kubelet[2509]: E1108 00:21:12.375161 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:12.406964 kubelet[2509]: I1108 00:21:12.405548 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6z69l" podStartSLOduration=42.40551656 podStartE2EDuration="42.40551656s" podCreationTimestamp="2025-11-08 00:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:12.398402503 +0000 UTC m=+49.346796957" watchObservedRunningTime="2025-11-08 00:21:12.40551656 +0000 UTC m=+49.353911014"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.389 [INFO][4752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.389 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" iface="eth0" netns="/var/run/netns/cni-7c418eef-b6c5-1d22-76b6-287676d5af96"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.393 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" iface="eth0" netns="/var/run/netns/cni-7c418eef-b6c5-1d22-76b6-287676d5af96"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.395 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" iface="eth0" netns="/var/run/netns/cni-7c418eef-b6c5-1d22-76b6-287676d5af96"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.395 [INFO][4752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.395 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.473 [INFO][4779] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.475 [INFO][4779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.475 [INFO][4779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.522 [WARNING][4779] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.522 [INFO][4779] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.527 [INFO][4779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:12.555386 containerd[1462]: 2025-11-08 00:21:12.542 [INFO][4752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4"
Nov 8 00:21:12.570404 containerd[1462]: time="2025-11-08T00:21:12.562464855Z" level=info msg="TearDown network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" successfully"
Nov 8 00:21:12.570404 containerd[1462]: time="2025-11-08T00:21:12.562554915Z" level=info msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" returns successfully"
Nov 8 00:21:12.568654 systemd[1]: run-netns-cni\x2d7c418eef\x2db6c5\x2d1d22\x2d76b6\x2d287676d5af96.mount: Deactivated successfully.
Nov 8 00:21:12.577853 containerd[1462]: time="2025-11-08T00:21:12.577791440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-v47ws,Uid:48366bf3-7c5b-44ee-9949-cb0f73b78d3c,Namespace:calico-apiserver,Attempt:1,}"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.378 [INFO][4753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.381 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" iface="eth0" netns="/var/run/netns/cni-8fa5f491-c2c5-f59e-b2c5-87b314402f0b"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.382 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" iface="eth0" netns="/var/run/netns/cni-8fa5f491-c2c5-f59e-b2c5-87b314402f0b"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.382 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" iface="eth0" netns="/var/run/netns/cni-8fa5f491-c2c5-f59e-b2c5-87b314402f0b"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.382 [INFO][4753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.382 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.481 [INFO][4776] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.483 [INFO][4776] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.527 [INFO][4776] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.557 [WARNING][4776] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.557 [INFO][4776] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.560 [INFO][4776] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:12.595174 containerd[1462]: 2025-11-08 00:21:12.583 [INFO][4753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02"
Nov 8 00:21:12.599716 containerd[1462]: time="2025-11-08T00:21:12.599519708Z" level=info msg="TearDown network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" successfully"
Nov 8 00:21:12.599716 containerd[1462]: time="2025-11-08T00:21:12.599688857Z" level=info msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" returns successfully"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.395 [INFO][4754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.399 [INFO][4754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" iface="eth0" netns="/var/run/netns/cni-3744f029-83ea-f47f-621c-a390461a29ba"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.402 [INFO][4754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" iface="eth0" netns="/var/run/netns/cni-3744f029-83ea-f47f-621c-a390461a29ba"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.403 [INFO][4754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" iface="eth0" netns="/var/run/netns/cni-3744f029-83ea-f47f-621c-a390461a29ba"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.403 [INFO][4754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.403 [INFO][4754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.547 [INFO][4785] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.547 [INFO][4785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.561 [INFO][4785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.586 [WARNING][4785] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.586 [INFO][4785] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.592 [INFO][4785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:12.611123 containerd[1462]: 2025-11-08 00:21:12.599 [INFO][4754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:12.611123 containerd[1462]: time="2025-11-08T00:21:12.610170405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4wpxj,Uid:6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d,Namespace:calico-system,Attempt:1,}"
Nov 8 00:21:12.611696 systemd[1]: run-netns-cni\x2d8fa5f491\x2dc2c5\x2df59e\x2db2c5\x2d87b314402f0b.mount: Deactivated successfully.
Nov 8 00:21:12.614313 containerd[1462]: time="2025-11-08T00:21:12.614184405Z" level=info msg="TearDown network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" successfully"
Nov 8 00:21:12.614313 containerd[1462]: time="2025-11-08T00:21:12.614270828Z" level=info msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" returns successfully"
Nov 8 00:21:12.621041 systemd[1]: run-netns-cni\x2d3744f029\x2d83ea\x2df47f\x2d621c\x2da390461a29ba.mount: Deactivated successfully.
Nov 8 00:21:12.625418 containerd[1462]: time="2025-11-08T00:21:12.625300829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjcb,Uid:a7e1a5e5-d1e7-4901-bce6-3563db023294,Namespace:calico-system,Attempt:1,}"
Nov 8 00:21:13.027339 systemd-networkd[1407]: cali9bbc8317a2e: Gained IPv6LL
Nov 8 00:21:13.126846 systemd-networkd[1407]: cali804435af531: Link UP
Nov 8 00:21:13.127926 systemd-networkd[1407]: cali804435af531: Gained carrier
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:12.975 [INFO][4806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0 calico-apiserver-6477c478b5- calico-apiserver 48366bf3-7c5b-44ee-9949-cb0f73b78d3c 1085 0 2025-11-08 00:20:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6477c478b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6477c478b5-v47ws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali804435af531 [] [] }} ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:12.977 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.079 [INFO][4852] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" HandleID="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.080 [INFO][4852] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" HandleID="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6477c478b5-v47ws", "timestamp":"2025-11-08 00:21:13.079587376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.080 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.080 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.080 [INFO][4852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.093 [INFO][4852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.098 [INFO][4852] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.102 [INFO][4852] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.104 [INFO][4852] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.106 [INFO][4852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.106 [INFO][4852] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.108 [INFO][4852] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.112 [INFO][4852] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.118 [INFO][4852] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.118 [INFO][4852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" host="localhost"
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.118 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:13.146084 containerd[1462]: 2025-11-08 00:21:13.118 [INFO][4852] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" HandleID="k8s-pod-network.276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.122 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"48366bf3-7c5b-44ee-9949-cb0f73b78d3c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6477c478b5-v47ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804435af531", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.123 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.123 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali804435af531 ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.127 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.130 [INFO][4806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"48366bf3-7c5b-44ee-9949-cb0f73b78d3c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d", Pod:"calico-apiserver-6477c478b5-v47ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804435af531", MAC:"9a:12:9a:4c:2b:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.146905 containerd[1462]: 2025-11-08 00:21:13.138 [INFO][4806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d" Namespace="calico-apiserver" Pod="calico-apiserver-6477c478b5-v47ws" WorkloadEndpoint="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0"
Nov 8 00:21:13.175136 containerd[1462]: time="2025-11-08T00:21:13.174985347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:13.175136 containerd[1462]: time="2025-11-08T00:21:13.175072802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:13.175136 containerd[1462]: time="2025-11-08T00:21:13.175092509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.176769 containerd[1462]: time="2025-11-08T00:21:13.175857121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.206080 systemd[1]: Started cri-containerd-276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d.scope - libcontainer container 276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d.
Nov 8 00:21:13.230700 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:21:13.241538 systemd-networkd[1407]: cali39cd90c73d3: Link UP
Nov 8 00:21:13.241776 systemd-networkd[1407]: cali39cd90c73d3: Gained carrier
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:12.979 [INFO][4813] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--4wpxj-eth0 goldmane-7c778bb748- calico-system 6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d 1083 0 2025-11-08 00:20:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-4wpxj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali39cd90c73d3 [] [] }} ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:12.979 [INFO][4813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.091 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" HandleID="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.091 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" HandleID="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-4wpxj", "timestamp":"2025-11-08 00:21:13.090997513 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.091 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.118 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.119 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.194 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.200 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.208 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.211 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.214 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.214 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.217 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.221 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.231 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.231 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" host="localhost"
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.232 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:13.263358 containerd[1462]: 2025-11-08 00:21:13.232 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" HandleID="k8s-pod-network.c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.238 [INFO][4813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4wpxj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-4wpxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39cd90c73d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.238 [INFO][4813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.238 [INFO][4813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39cd90c73d3 ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.243 [INFO][4813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.246 [INFO][4813] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4wpxj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394", Pod:"goldmane-7c778bb748-4wpxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39cd90c73d3", MAC:"6a:af:de:57:9b:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.264070 containerd[1462]: 2025-11-08 00:21:13.258 [INFO][4813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394" Namespace="calico-system" Pod="goldmane-7c778bb748-4wpxj" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0"
Nov 8 00:21:13.286351 containerd[1462]: time="2025-11-08T00:21:13.281237176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6477c478b5-v47ws,Uid:48366bf3-7c5b-44ee-9949-cb0f73b78d3c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d\""
Nov 8 00:21:13.291781 containerd[1462]: time="2025-11-08T00:21:13.291285896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:21:13.316490 containerd[1462]: time="2025-11-08T00:21:13.316239584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:13.316490 containerd[1462]: time="2025-11-08T00:21:13.316356344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:13.316490 containerd[1462]: time="2025-11-08T00:21:13.316387032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.318068 containerd[1462]: time="2025-11-08T00:21:13.316614290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.342927 systemd-networkd[1407]: cali3e386f8a733: Link UP
Nov 8 00:21:13.343189 systemd-networkd[1407]: cali3e386f8a733: Gained carrier
Nov 8 00:21:13.349119 systemd[1]: Started cri-containerd-c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394.scope - libcontainer container c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394.
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:12.975 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rmjcb-eth0 csi-node-driver- calico-system a7e1a5e5-d1e7-4901-bce6-3563db023294 1086 0 2025-11-08 00:20:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rmjcb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3e386f8a733 [] [] }} ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:12.975 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.094 [INFO][4849] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" HandleID="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.094 [INFO][4849] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" HandleID="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036b760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rmjcb", "timestamp":"2025-11-08 00:21:13.094327893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.094 [INFO][4849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.236 [INFO][4849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.236 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.297 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.305 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.310 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.312 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.314 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.314 [INFO][4849] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.315 [INFO][4849] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.324 [INFO][4849] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.331 [INFO][4849] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.331 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" host="localhost"
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.331 [INFO][4849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:13.367048 containerd[1462]: 2025-11-08 00:21:13.331 [INFO][4849] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" HandleID="k8s-pod-network.f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.338 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rmjcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7e1a5e5-d1e7-4901-bce6-3563db023294", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rmjcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e386f8a733", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.338 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.339 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e386f8a733 ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.345 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.346 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rmjcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7e1a5e5-d1e7-4901-bce6-3563db023294", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178", Pod:"csi-node-driver-rmjcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e386f8a733", MAC:"4a:b5:3b:10:2f:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:13.368000 containerd[1462]: 2025-11-08 00:21:13.359 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178" Namespace="calico-system" Pod="csi-node-driver-rmjcb" WorkloadEndpoint="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:13.379906 kubelet[2509]: E1108 00:21:13.379287 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:13.380350 kubelet[2509]: E1108 00:21:13.379910 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:13.382224 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:21:13.399352 containerd[1462]: time="2025-11-08T00:21:13.399163291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:21:13.399481 containerd[1462]: time="2025-11-08T00:21:13.399299077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:21:13.399512 containerd[1462]: time="2025-11-08T00:21:13.399470421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.399795 containerd[1462]: time="2025-11-08T00:21:13.399683222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:21:13.424275 systemd[1]: Started cri-containerd-f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178.scope - libcontainer container f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178.
Nov 8 00:21:13.425337 containerd[1462]: time="2025-11-08T00:21:13.425273300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-4wpxj,Uid:6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d,Namespace:calico-system,Attempt:1,} returns sandbox id \"c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394\""
Nov 8 00:21:13.441006 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:21:13.457361 containerd[1462]: time="2025-11-08T00:21:13.457299064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rmjcb,Uid:a7e1a5e5-d1e7-4901-bce6-3563db023294,Namespace:calico-system,Attempt:1,} returns sandbox id \"f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178\""
Nov 8 00:21:13.629136 containerd[1462]: time="2025-11-08T00:21:13.628962005Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:13.804235 containerd[1462]: time="2025-11-08T00:21:13.804143651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:21:13.804389 containerd[1462]: time="2025-11-08T00:21:13.804178999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:13.804848 kubelet[2509]: E1108 00:21:13.804545 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:13.804848 kubelet[2509]: E1108 00:21:13.804622 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:13.805707 kubelet[2509]: E1108 00:21:13.805001 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-v47ws_calico-apiserver(48366bf3-7c5b-44ee-9949-cb0f73b78d3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:13.805707 kubelet[2509]: E1108 00:21:13.805051 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c"
Nov 8 00:21:13.806313 containerd[1462]: time="2025-11-08T00:21:13.805936152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:21:14.183922 containerd[1462]: time="2025-11-08T00:21:14.183853563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:14.185160 containerd[1462]: time="2025-11-08T00:21:14.185055418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:21:14.185208 containerd[1462]: time="2025-11-08T00:21:14.185123427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:14.185338 kubelet[2509]: E1108 00:21:14.185296 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:14.185403 kubelet[2509]: E1108 00:21:14.185351 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:14.185596 kubelet[2509]: E1108 00:21:14.185548 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4wpxj_calico-system(6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:14.185657 kubelet[2509]: E1108 00:21:14.185621 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:14.185689 containerd[1462]: time="2025-11-08T00:21:14.185611127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:21:14.383625 kubelet[2509]: E1108 00:21:14.383569 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:14.384191 kubelet[2509]: E1108 00:21:14.384141 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c"
Nov 8 00:21:14.384348 kubelet[2509]: E1108 00:21:14.384244 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:14.565688 containerd[1462]: time="2025-11-08T00:21:14.565502904Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:14.566793 containerd[1462]: time="2025-11-08T00:21:14.566753933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:21:14.566876 containerd[1462]: time="2025-11-08T00:21:14.566828924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:21:14.567123 kubelet[2509]: E1108 00:21:14.567068 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:14.567188 kubelet[2509]: E1108 00:21:14.567131 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:14.567295 kubelet[2509]: E1108 00:21:14.567245 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:14.568370 containerd[1462]: time="2025-11-08T00:21:14.568339422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:21:14.690062 systemd-networkd[1407]: cali39cd90c73d3: Gained IPv6LL
Nov 8 00:21:14.691134 systemd-networkd[1407]: cali804435af531: Gained IPv6LL
Nov 8 00:21:14.833001 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:43734.service - OpenSSH per-connection server daemon (10.0.0.1:43734).
Nov 8 00:21:14.878706 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 43734 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:14.881064 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:14.885588 systemd-logind[1454]: New session 10 of user core.
Nov 8 00:21:14.889021 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 8 00:21:14.915034 containerd[1462]: time="2025-11-08T00:21:14.914975781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:14.994569 containerd[1462]: time="2025-11-08T00:21:14.994456264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:21:14.994569 containerd[1462]: time="2025-11-08T00:21:14.994523260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:21:14.994921 kubelet[2509]: E1108 00:21:14.994839 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:14.994968 kubelet[2509]: E1108 00:21:14.994923 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:14.995033 kubelet[2509]: E1108 00:21:14.995008 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:14.995112 kubelet[2509]: E1108 00:21:14.995052 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294"
Nov 8 00:21:15.107578 sshd[5039]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:15.112799 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:43734.service: Deactivated successfully.
Nov 8 00:21:15.115753 systemd[1]: session-10.scope: Deactivated successfully.
Nov 8 00:21:15.117327 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Nov 8 00:21:15.118718 systemd-logind[1454]: Removed session 10.
Nov 8 00:21:15.266220 systemd-networkd[1407]: cali3e386f8a733: Gained IPv6LL
Nov 8 00:21:15.386953 kubelet[2509]: E1108 00:21:15.386775 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:15.387634 kubelet[2509]: E1108 00:21:15.387558 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294"
Nov 8 00:21:20.127648 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136).
Nov 8 00:21:20.162908 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:20.164588 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:20.169244 systemd-logind[1454]: New session 11 of user core.
Nov 8 00:21:20.176039 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 8 00:21:20.348744 sshd[5064]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:20.360429 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:56136.service: Deactivated successfully.
Nov 8 00:21:20.362656 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:21:20.364900 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Nov 8 00:21:20.366632 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:56148.service - OpenSSH per-connection server daemon (10.0.0.1:56148).
Nov 8 00:21:20.368223 systemd-logind[1454]: Removed session 11.
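The recurring dns.go:154 errors in this stretch are kubelet flagging that the node's resolv.conf lists more nameservers than the three that glibc resolvers support (MAXNS), so it applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A minimal Go sketch of that clamping; the function name is illustrative, not kubelet's:

    // dnsclamp_sketch.go: illustrative version of the nameserver clamping
    // behind kubelet's "Nameserver limits exceeded" message.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS: resolv.conf honors at most 3

    // clampNameservers returns the servers actually applied, plus a warning
    // string when some had to be dropped.
    func clampNameservers(servers []string) (applied []string, warn string) {
        if len(servers) <= maxNameservers {
            return servers, ""
        }
        applied = servers[:maxNameservers]
        warn = fmt.Sprintf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s", strings.Join(applied, " "))
        return applied, warn
    }

    func main() {
        // Assumed input: a resolv.conf with four upstreams; only the count
        // beyond three matters for triggering the warning.
        applied, warn := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println(applied)
        fmt.Println(warn)
    }

The message repeats on every pod sync because the check runs each time kubelet assembles a pod's resolv.conf, not because the configuration is changing.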
Nov 8 00:21:20.400576 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 56148 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:20.402168 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:20.407121 systemd-logind[1454]: New session 12 of user core.
Nov 8 00:21:20.414168 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:21:20.734595 sshd[5079]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:20.747152 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:56148.service: Deactivated successfully.
Nov 8 00:21:20.754081 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:21:20.757707 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:21:20.764205 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:56164.service - OpenSSH per-connection server daemon (10.0.0.1:56164).
Nov 8 00:21:20.765170 systemd-logind[1454]: Removed session 12.
Nov 8 00:21:20.799576 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 56164 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:20.801322 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:20.805576 systemd-logind[1454]: New session 13 of user core.
Nov 8 00:21:20.811047 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:21:20.931649 sshd[5091]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:20.935793 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:56164.service: Deactivated successfully.
Nov 8 00:21:20.938353 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:21:20.939157 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:21:20.940117 systemd-logind[1454]: Removed session 13.
Nov 8 00:21:21.172716 containerd[1462]: time="2025-11-08T00:21:21.172488240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:21:21.515894 containerd[1462]: time="2025-11-08T00:21:21.515710527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:21.517003 containerd[1462]: time="2025-11-08T00:21:21.516955554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:21:21.517148 containerd[1462]: time="2025-11-08T00:21:21.517046305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:21.517266 kubelet[2509]: E1108 00:21:21.517196 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:21.517266 kubelet[2509]: E1108 00:21:21.517257 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:21.517780 kubelet[2509]: E1108 00:21:21.517355 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-xfnk2_calico-apiserver(779024e8-f065-402a-9618-c2d1616b455b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:21.517780 kubelet[2509]: E1108 00:21:21.517388 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b"
Nov 8 00:21:22.172316 containerd[1462]: time="2025-11-08T00:21:22.172161773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:21:22.531427 containerd[1462]: time="2025-11-08T00:21:22.531234318Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:22.532581 containerd[1462]: time="2025-11-08T00:21:22.532541021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:21:22.532640 containerd[1462]: time="2025-11-08T00:21:22.532601205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:21:22.532886 kubelet[2509]: E1108 00:21:22.532813 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:21:22.533185 kubelet[2509]: E1108 00:21:22.532894 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:21:22.533185 kubelet[2509]: E1108 00:21:22.532991 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:22.534763 containerd[1462]: time="2025-11-08T00:21:22.534553704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:21:22.879169 containerd[1462]: time="2025-11-08T00:21:22.879005202Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:22.880304 containerd[1462]: time="2025-11-08T00:21:22.880259836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:21:22.880407 containerd[1462]: time="2025-11-08T00:21:22.880350357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:21:22.880609 kubelet[2509]: E1108 00:21:22.880542 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:21:22.880696 kubelet[2509]: E1108 00:21:22.880610 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:21:22.880731 kubelet[2509]: E1108 00:21:22.880703 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:22.880790 kubelet[2509]: E1108 00:21:22.880749 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56944ff74d-jfjjh" podUID="6cb2e068-b098-433b-ba03-a3d8a7a50da8"
Nov 8 00:21:23.149036 containerd[1462]: time="2025-11-08T00:21:23.148921380Z" level=info msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\""
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.222 [WARNING][5114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"779024e8-f065-402a-9618-c2d1616b455b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac", Pod:"calico-apiserver-6477c478b5-xfnk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80dbf584060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.223 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.223 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" iface="eth0" netns=""
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.223 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.223 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.251 [INFO][5124] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.251 [INFO][5124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.251 [INFO][5124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.259 [WARNING][5124] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.259 [INFO][5124] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.260 [INFO][5124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.266037 containerd[1462]: 2025-11-08 00:21:23.263 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.266596 containerd[1462]: time="2025-11-08T00:21:23.266095956Z" level=info msg="TearDown network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" successfully"
Nov 8 00:21:23.266596 containerd[1462]: time="2025-11-08T00:21:23.266122657Z" level=info msg="StopPodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" returns successfully"
Nov 8 00:21:23.274193 containerd[1462]: time="2025-11-08T00:21:23.274157540Z" level=info msg="RemovePodSandbox for \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\""
Nov 8 00:21:23.276327 containerd[1462]: time="2025-11-08T00:21:23.276287795Z" level=info msg="Forcibly stopping sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\""
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.309 [WARNING][5141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"779024e8-f065-402a-9618-c2d1616b455b", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"504b5ef75626edd4b162a228a455f5be798686764e91506f4670b3493c460fac", Pod:"calico-apiserver-6477c478b5-xfnk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80dbf584060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.309 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.310 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" iface="eth0" netns=""
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.310 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.310 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.331 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.332 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.332 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.337 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.337 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" HandleID="k8s-pod-network.ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a" Workload="localhost-k8s-calico--apiserver--6477c478b5--xfnk2-eth0"
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.338 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.343798 containerd[1462]: 2025-11-08 00:21:23.341 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a"
Nov 8 00:21:23.344416 containerd[1462]: time="2025-11-08T00:21:23.343848411Z" level=info msg="TearDown network for sandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" successfully"
Nov 8 00:21:23.561362 containerd[1462]: time="2025-11-08T00:21:23.561285580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:21:23.561846 containerd[1462]: time="2025-11-08T00:21:23.561380298Z" level=info msg="RemovePodSandbox \"ac3affa8d79b85209690bd1f3cabca9ce70208cb7deccbe134229a6d64b4313a\" returns successfully"
Nov 8 00:21:23.561932 containerd[1462]: time="2025-11-08T00:21:23.561855384Z" level=info msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\""
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.592 [WARNING][5169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rmjcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7e1a5e5-d1e7-4901-bce6-3563db023294", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178", Pod:"csi-node-driver-rmjcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e386f8a733", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.593 [INFO][5169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.593 [INFO][5169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" iface="eth0" netns=""
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.593 [INFO][5169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.593 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.612 [INFO][5178] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.612 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.612 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.618 [WARNING][5178] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.618 [INFO][5178] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.620 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.625038 containerd[1462]: 2025-11-08 00:21:23.622 [INFO][5169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.625597 containerd[1462]: time="2025-11-08T00:21:23.625088454Z" level=info msg="TearDown network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" successfully"
Nov 8 00:21:23.625597 containerd[1462]: time="2025-11-08T00:21:23.625117087Z" level=info msg="StopPodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" returns successfully"
Nov 8 00:21:23.625662 containerd[1462]: time="2025-11-08T00:21:23.625623622Z" level=info msg="RemovePodSandbox for \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\""
Nov 8 00:21:23.625662 containerd[1462]: time="2025-11-08T00:21:23.625648599Z" level=info msg="Forcibly stopping sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\""
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.660 [WARNING][5196] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rmjcb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7e1a5e5-d1e7-4901-bce6-3563db023294", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f11e18c9a0ed4345009b2db8e650518b585051336b5684df835b9482b602d178", Pod:"csi-node-driver-rmjcb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3e386f8a733", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.661 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.661 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" iface="eth0" netns=""
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.661 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.661 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.685 [INFO][5205] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.685 [INFO][5205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.685 [INFO][5205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.691 [WARNING][5205] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.691 [INFO][5205] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" HandleID="k8s-pod-network.9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae" Workload="localhost-k8s-csi--node--driver--rmjcb-eth0"
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.692 [INFO][5205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.698286 containerd[1462]: 2025-11-08 00:21:23.695 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae"
Nov 8 00:21:23.698926 containerd[1462]: time="2025-11-08T00:21:23.698351407Z" level=info msg="TearDown network for sandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" successfully"
Nov 8 00:21:23.754934 containerd[1462]: time="2025-11-08T00:21:23.754854462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:21:23.754934 containerd[1462]: time="2025-11-08T00:21:23.754934774Z" level=info msg="RemovePodSandbox \"9d608e3feb7969b176df72d386018b543af273dbad01baff4100e0c999f23fae\" returns successfully"
Nov 8 00:21:23.755450 containerd[1462]: time="2025-11-08T00:21:23.755394069Z" level=info msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\""
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.791 [WARNING][5223] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" WorkloadEndpoint="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.791 [INFO][5223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.791 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" iface="eth0" netns=""
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.791 [INFO][5223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.791 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.815 [INFO][5231] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.815 [INFO][5231] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.815 [INFO][5231] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.821 [WARNING][5231] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.821 [INFO][5231] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.822 [INFO][5231] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.828029 containerd[1462]: 2025-11-08 00:21:23.825 [INFO][5223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.828029 containerd[1462]: time="2025-11-08T00:21:23.827997851Z" level=info msg="TearDown network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" successfully"
Nov 8 00:21:23.828029 containerd[1462]: time="2025-11-08T00:21:23.828030191Z" level=info msg="StopPodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" returns successfully"
Nov 8 00:21:23.828586 containerd[1462]: time="2025-11-08T00:21:23.828556113Z" level=info msg="RemovePodSandbox for \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\""
Nov 8 00:21:23.828586 containerd[1462]: time="2025-11-08T00:21:23.828596088Z" level=info msg="Forcibly stopping sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\""
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.862 [WARNING][5249] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" WorkloadEndpoint="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.863 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.863 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" iface="eth0" netns=""
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.863 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.863 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.885 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.885 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.886 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.891 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.891 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" HandleID="k8s-pod-network.3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0" Workload="localhost-k8s-whisker--564f48998b--v7v74-eth0"
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.892 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:23.898301 containerd[1462]: 2025-11-08 00:21:23.895 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0"
Nov 8 00:21:23.898804 containerd[1462]: time="2025-11-08T00:21:23.898372550Z" level=info msg="TearDown network for sandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" successfully"
Nov 8 00:21:23.902984 containerd[1462]: time="2025-11-08T00:21:23.902928587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:21:23.903040 containerd[1462]: time="2025-11-08T00:21:23.902984593Z" level=info msg="RemovePodSandbox \"3cee131cf615d7e57e774e1648d5023ab382e2269f0feff953472329233dd7f0\" returns successfully"
Nov 8 00:21:23.903521 containerd[1462]: time="2025-11-08T00:21:23.903494634Z" level=info msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\""
Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.937 [WARNING][5275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"48366bf3-7c5b-44ee-9949-cb0f73b78d3c", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d", Pod:"calico-apiserver-6477c478b5-v47ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804435af531", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.937 [INFO][5275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.937 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" iface="eth0" netns="" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.937 [INFO][5275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.937 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.957 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.957 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.957 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.964 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.964 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.965 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:23.971071 containerd[1462]: 2025-11-08 00:21:23.968 [INFO][5275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:23.971665 containerd[1462]: time="2025-11-08T00:21:23.971125242Z" level=info msg="TearDown network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" successfully" Nov 8 00:21:23.971665 containerd[1462]: time="2025-11-08T00:21:23.971158334Z" level=info msg="StopPodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" returns successfully" Nov 8 00:21:23.971929 containerd[1462]: time="2025-11-08T00:21:23.971813650Z" level=info msg="RemovePodSandbox for \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\"" Nov 8 00:21:23.971929 containerd[1462]: time="2025-11-08T00:21:23.971857252Z" level=info msg="Forcibly stopping sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\"" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.008 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0", GenerateName:"calico-apiserver-6477c478b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"48366bf3-7c5b-44ee-9949-cb0f73b78d3c", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6477c478b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"276141d6c225c664430022f3af6f7382ed58b5379b7a4fa7b47e28aaa5936a4d", Pod:"calico-apiserver-6477c478b5-v47ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali804435af531", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.008 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.008 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" iface="eth0" netns="" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.008 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.008 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.031 [INFO][5311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.031 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.031 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.039 [WARNING][5311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.039 [INFO][5311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" HandleID="k8s-pod-network.d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Workload="localhost-k8s-calico--apiserver--6477c478b5--v47ws-eth0" Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.040 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.047299 containerd[1462]: 2025-11-08 00:21:24.043 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4" Nov 8 00:21:24.047944 containerd[1462]: time="2025-11-08T00:21:24.047351708Z" level=info msg="TearDown network for sandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" successfully" Nov 8 00:21:24.052586 containerd[1462]: time="2025-11-08T00:21:24.052424950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:24.052586 containerd[1462]: time="2025-11-08T00:21:24.052492537Z" level=info msg="RemovePodSandbox \"d4527f884063aba23750bcaa446db456f8eb6de29e9773d484c988a6c00facd4\" returns successfully" Nov 8 00:21:24.053121 containerd[1462]: time="2025-11-08T00:21:24.053081146Z" level=info msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\"" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.094 [WARNING][5328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4wpxj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394", Pod:"goldmane-7c778bb748-4wpxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39cd90c73d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.094 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.094 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" iface="eth0" netns="" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.094 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.094 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.116 [INFO][5339] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.116 [INFO][5339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.116 [INFO][5339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.121 [WARNING][5339] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.121 [INFO][5339] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.123 [INFO][5339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.128840 containerd[1462]: 2025-11-08 00:21:24.125 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.128840 containerd[1462]: time="2025-11-08T00:21:24.128791026Z" level=info msg="TearDown network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" successfully" Nov 8 00:21:24.128840 containerd[1462]: time="2025-11-08T00:21:24.128820592Z" level=info msg="StopPodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" returns successfully" Nov 8 00:21:24.129451 containerd[1462]: time="2025-11-08T00:21:24.129418859Z" level=info msg="RemovePodSandbox for \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\"" Nov 8 00:21:24.129488 containerd[1462]: time="2025-11-08T00:21:24.129450830Z" level=info msg="Forcibly stopping sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\"" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.165 [WARNING][5357] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--4wpxj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7c032f97900d61a69ab8e52267e8fcf625bb5c89ec3ee476acb2caf1ab2c394", Pod:"goldmane-7c778bb748-4wpxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39cd90c73d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.165 [INFO][5357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.165 [INFO][5357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" iface="eth0" netns="" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.165 [INFO][5357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.165 [INFO][5357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.188 [INFO][5366] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.188 [INFO][5366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.188 [INFO][5366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.196 [WARNING][5366] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.196 [INFO][5366] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" HandleID="k8s-pod-network.94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Workload="localhost-k8s-goldmane--7c778bb748--4wpxj-eth0" Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.198 [INFO][5366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.204606 containerd[1462]: 2025-11-08 00:21:24.201 [INFO][5357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02" Nov 8 00:21:24.205155 containerd[1462]: time="2025-11-08T00:21:24.204672768Z" level=info msg="TearDown network for sandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" successfully" Nov 8 00:21:24.210631 containerd[1462]: time="2025-11-08T00:21:24.210565863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:24.210818 containerd[1462]: time="2025-11-08T00:21:24.210644722Z" level=info msg="RemovePodSandbox \"94179c98b0ac4872bdae6d4354fee44971132d102aaf3f6a36a0f194f3677e02\" returns successfully" Nov 8 00:21:24.211382 containerd[1462]: time="2025-11-08T00:21:24.211360158Z" level=info msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.255 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8f6fz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9e0d5390-5a60-44e9-a40d-847919eb2c6d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453", Pod:"coredns-66bc5c9577-8f6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief257d0dc87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.256 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.256 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" iface="eth0" netns="" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.256 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.256 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.279 [INFO][5391] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.279 [INFO][5391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.279 [INFO][5391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.287 [WARNING][5391] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.287 [INFO][5391] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.290 [INFO][5391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.296686 containerd[1462]: 2025-11-08 00:21:24.293 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.297276 containerd[1462]: time="2025-11-08T00:21:24.296742979Z" level=info msg="TearDown network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" successfully" Nov 8 00:21:24.297276 containerd[1462]: time="2025-11-08T00:21:24.296773246Z" level=info msg="StopPodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" returns successfully" Nov 8 00:21:24.297475 containerd[1462]: time="2025-11-08T00:21:24.297432437Z" level=info msg="RemovePodSandbox for \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" Nov 8 00:21:24.297532 containerd[1462]: time="2025-11-08T00:21:24.297485438Z" level=info msg="Forcibly stopping sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\"" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.335 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8f6fz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9e0d5390-5a60-44e9-a40d-847919eb2c6d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bff449784c23bd2416fe15063c779a30e60578b2089d77b21758913772cc453", Pod:"coredns-66bc5c9577-8f6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief257d0dc87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.336 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.336 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" iface="eth0" netns="" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.336 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.336 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.361 [INFO][5417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.361 [INFO][5417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.361 [INFO][5417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.366 [WARNING][5417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.366 [INFO][5417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" HandleID="k8s-pod-network.341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Workload="localhost-k8s-coredns--66bc5c9577--8f6fz-eth0" Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.370 [INFO][5417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.377354 containerd[1462]: 2025-11-08 00:21:24.374 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26" Nov 8 00:21:24.377811 containerd[1462]: time="2025-11-08T00:21:24.377422646Z" level=info msg="TearDown network for sandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" successfully" Nov 8 00:21:24.391126 containerd[1462]: time="2025-11-08T00:21:24.391008725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:24.391126 containerd[1462]: time="2025-11-08T00:21:24.391071724Z" level=info msg="RemovePodSandbox \"341169dc53ea860232d607bb098762ee4580b6727d17c96ca5cd3fe61500ea26\" returns successfully" Nov 8 00:21:24.392191 containerd[1462]: time="2025-11-08T00:21:24.391682313Z" level=info msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.427 [WARNING][5435] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6z69l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6095f60b-9a5f-4061-ba74-c474c415b963", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d", Pod:"coredns-66bc5c9577-6z69l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bbc8317a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.427 [INFO][5435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.427 [INFO][5435] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" iface="eth0" netns="" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.427 [INFO][5435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.427 [INFO][5435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.450 [INFO][5444] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.450 [INFO][5444] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.450 [INFO][5444] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.458 [WARNING][5444] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.459 [INFO][5444] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.460 [INFO][5444] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.465886 containerd[1462]: 2025-11-08 00:21:24.463 [INFO][5435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.466318 containerd[1462]: time="2025-11-08T00:21:24.465935311Z" level=info msg="TearDown network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" successfully" Nov 8 00:21:24.466318 containerd[1462]: time="2025-11-08T00:21:24.465966600Z" level=info msg="StopPodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" returns successfully" Nov 8 00:21:24.466502 containerd[1462]: time="2025-11-08T00:21:24.466478714Z" level=info msg="RemovePodSandbox for \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" Nov 8 00:21:24.466534 containerd[1462]: time="2025-11-08T00:21:24.466508420Z" level=info msg="Forcibly stopping sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\"" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.502 [WARNING][5462] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--6z69l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6095f60b-9a5f-4061-ba74-c474c415b963", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6f44a068d2b4c6400756c7b4bce9383deb57295361d75be88a419a25959768d", Pod:"coredns-66bc5c9577-6z69l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9bbc8317a2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.502 [INFO][5462] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.502 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" iface="eth0" netns="" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.502 [INFO][5462] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.502 [INFO][5462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.525 [INFO][5470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.526 [INFO][5470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.526 [INFO][5470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.531 [WARNING][5470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.531 [INFO][5470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" HandleID="k8s-pod-network.3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Workload="localhost-k8s-coredns--66bc5c9577--6z69l-eth0" Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.532 [INFO][5470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:24.538050 containerd[1462]: 2025-11-08 00:21:24.535 [INFO][5462] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21" Nov 8 00:21:24.538496 containerd[1462]: time="2025-11-08T00:21:24.538090779Z" level=info msg="TearDown network for sandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" successfully" Nov 8 00:21:24.541984 containerd[1462]: time="2025-11-08T00:21:24.541954333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:24.542059 containerd[1462]: time="2025-11-08T00:21:24.541999177Z" level=info msg="RemovePodSandbox \"3b649da8210211224629df88679131ace7c6749aebf9a52e5ce129f8ce38cd21\" returns successfully" Nov 8 00:21:24.542656 containerd[1462]: time="2025-11-08T00:21:24.542586784Z" level=info msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\"" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.577 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0", GenerateName:"calico-kube-controllers-564bf5b6db-", Namespace:"calico-system", SelfLink:"", UID:"e3df79a4-2d69-4d1b-a3d8-5080134a94f0", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564bf5b6db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646", Pod:"calico-kube-controllers-564bf5b6db-26fpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4d37c79f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.578 [INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.578 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" iface="eth0" netns="" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.578 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.578 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.602 [INFO][5495] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0" Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.602 [INFO][5495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.602 [INFO][5495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.608 [WARNING][5495] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0"
Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.608 [INFO][5495] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0"
Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.609 [INFO][5495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:24.615071 containerd[1462]: 2025-11-08 00:21:24.612 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"
Nov 8 00:21:24.616024 containerd[1462]: time="2025-11-08T00:21:24.615119742Z" level=info msg="TearDown network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" successfully"
Nov 8 00:21:24.616024 containerd[1462]: time="2025-11-08T00:21:24.615147856Z" level=info msg="StopPodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" returns successfully"
Nov 8 00:21:24.616024 containerd[1462]: time="2025-11-08T00:21:24.615771600Z" level=info msg="RemovePodSandbox for \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\""
Nov 8 00:21:24.616024 containerd[1462]: time="2025-11-08T00:21:24.615799863Z" level=info msg="Forcibly stopping sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\""
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.651 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0", GenerateName:"calico-kube-controllers-564bf5b6db-", Namespace:"calico-system", SelfLink:"", UID:"e3df79a4-2d69-4d1b-a3d8-5080134a94f0", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564bf5b6db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ca716a3f40dc75b50d03d27a01e51614a7b62e5171ff86d7cc5718314dc646", Pod:"calico-kube-controllers-564bf5b6db-26fpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4d37c79f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.651 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.651 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" iface="eth0" netns=""
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.651 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.651 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.674 [INFO][5520] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.674 [INFO][5520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.674 [INFO][5520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.680 [WARNING][5520] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.680 [INFO][5520] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" HandleID="k8s-pod-network.a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f" Workload="localhost-k8s-calico--kube--controllers--564bf5b6db--26fpn-eth0"
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.682 [INFO][5520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:21:24.687474 containerd[1462]: 2025-11-08 00:21:24.684 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f"
Nov 8 00:21:24.688028 containerd[1462]: time="2025-11-08T00:21:24.687521614Z" level=info msg="TearDown network for sandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" successfully"
Nov 8 00:21:24.691635 containerd[1462]: time="2025-11-08T00:21:24.691591036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:21:24.691702 containerd[1462]: time="2025-11-08T00:21:24.691649096Z" level=info msg="RemovePodSandbox \"a7ddd2bcca2846a569aa4ce48bb3105a57627f9b8015665eb50928e7a8c8412f\" returns successfully"
Nov 8 00:21:25.172477 containerd[1462]: time="2025-11-08T00:21:25.172422471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:21:25.516206 containerd[1462]: time="2025-11-08T00:21:25.516045645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:25.605990 containerd[1462]: time="2025-11-08T00:21:25.605901979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:21:25.606162 containerd[1462]: time="2025-11-08T00:21:25.605927696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:21:25.606367 kubelet[2509]: E1108 00:21:25.606288 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:21:25.606830 kubelet[2509]: E1108 00:21:25.606370 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:21:25.606830 kubelet[2509]: E1108 00:21:25.606478 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-564bf5b6db-26fpn_calico-system(e3df79a4-2d69-4d1b-a3d8-5080134a94f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:25.606830 kubelet[2509]: E1108 00:21:25.606524 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0"
Nov 8 00:21:25.946980 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:56174.service - OpenSSH per-connection server daemon (10.0.0.1:56174).
Nov 8 00:21:25.984888 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 56174 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:25.987024 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:25.991960 systemd-logind[1454]: New session 14 of user core.
Nov 8 00:21:26.002045 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:21:26.124210 sshd[5529]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:26.129643 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:56174.service: Deactivated successfully.
Nov 8 00:21:26.132459 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:21:26.133238 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:21:26.134575 systemd-logind[1454]: Removed session 14.
Nov 8 00:21:26.172037 containerd[1462]: time="2025-11-08T00:21:26.171995552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:21:26.532293 containerd[1462]: time="2025-11-08T00:21:26.532238404Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:26.533347 containerd[1462]: time="2025-11-08T00:21:26.533296469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:21:26.533433 containerd[1462]: time="2025-11-08T00:21:26.533374685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:26.533627 kubelet[2509]: E1108 00:21:26.533565 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:26.533627 kubelet[2509]: E1108 00:21:26.533626 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:26.533753 kubelet[2509]: E1108 00:21:26.533723 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-v47ws_calico-apiserver(48366bf3-7c5b-44ee-9949-cb0f73b78d3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:26.533792 kubelet[2509]: E1108 00:21:26.533761 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c"
Nov 8 00:21:27.171633 containerd[1462]: time="2025-11-08T00:21:27.171569538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:21:27.540488 containerd[1462]: time="2025-11-08T00:21:27.540310837Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:27.541566 containerd[1462]: time="2025-11-08T00:21:27.541507261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:21:27.541699 containerd[1462]: time="2025-11-08T00:21:27.541597088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:21:27.541808 kubelet[2509]: E1108 00:21:27.541771 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:27.542205 kubelet[2509]: E1108 00:21:27.541815 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:27.542205 kubelet[2509]: E1108 00:21:27.541906 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:27.542821 containerd[1462]: time="2025-11-08T00:21:27.542796527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:21:27.882781 containerd[1462]: time="2025-11-08T00:21:27.882616988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:27.883857 containerd[1462]: time="2025-11-08T00:21:27.883806909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:21:27.883946 containerd[1462]: time="2025-11-08T00:21:27.883900013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:21:27.884156 kubelet[2509]: E1108 00:21:27.884100 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:27.884211 kubelet[2509]: E1108 00:21:27.884167 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:27.884300 kubelet[2509]: E1108 00:21:27.884265 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:27.884431 kubelet[2509]: E1108 00:21:27.884337 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294"
Nov 8 00:21:28.171798 containerd[1462]: time="2025-11-08T00:21:28.171762758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:21:28.552967 containerd[1462]: time="2025-11-08T00:21:28.552781773Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:28.554079 containerd[1462]: time="2025-11-08T00:21:28.553997082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:21:28.554079 containerd[1462]: time="2025-11-08T00:21:28.554041596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:28.554330 kubelet[2509]: E1108 00:21:28.554278 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:28.554704 kubelet[2509]: E1108 00:21:28.554328 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:28.554704 kubelet[2509]: E1108 00:21:28.554422 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4wpxj_calico-system(6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:28.554704 kubelet[2509]: E1108 00:21:28.554450 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:31.137317 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:57612.service - OpenSSH per-connection server daemon (10.0.0.1:57612).
Nov 8 00:21:31.190591 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 57612 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:31.192307 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:31.197010 systemd-logind[1454]: New session 15 of user core.
Nov 8 00:21:31.205041 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:21:31.321959 sshd[5550]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:31.326851 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:57612.service: Deactivated successfully.
Nov 8 00:21:31.329658 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:21:31.330424 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:21:31.331471 systemd-logind[1454]: Removed session 15.
Nov 8 00:21:35.171947 kubelet[2509]: E1108 00:21:35.171809 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b"
Nov 8 00:21:36.339233 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:45900.service - OpenSSH per-connection server daemon (10.0.0.1:45900).
Nov 8 00:21:36.377054 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 45900 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:36.379161 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:36.383518 systemd-logind[1454]: New session 16 of user core.
Nov 8 00:21:36.396065 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:21:36.509538 sshd[5568]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:36.514410 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:45900.service: Deactivated successfully.
Nov 8 00:21:36.516495 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:21:36.517263 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:21:36.518335 systemd-logind[1454]: Removed session 16.
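Every pull in the sequence above fails the same way: containerd asks ghcr.io for a v3.30.4 tag, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces that as ErrImagePull for the pod. The missing tags can be confirmed off-box with a direct query against the registry's OCI distribution API. This is a minimal sketch, assuming ghcr.io's usual anonymous-token flow for public repositories; tag_exists is an illustrative helper name, not a tool that appears in this log:

    # Probe ghcr.io for an image tag via the OCI distribution API.
    # Assumption: anonymous pull tokens work for public GHCR repos;
    # `tag_exists` is a hypothetical helper, not taken from the log.
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # 1. Fetch an anonymous bearer token scoped to the repository.
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repo}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # 2. Request the manifest; a 404 here is the registry-side event
        #    behind containerd's "http.StatusNotFound" line above.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": ", ".join([
                    "application/vnd.oci.image.index.v1+json",
                    "application/vnd.docker.distribution.manifest.list.v2+json",
                    "application/vnd.docker.distribution.manifest.v2+json",
                ]),
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        for name in ("kube-controllers", "apiserver", "csi",
                     "node-driver-registrar", "goldmane",
                     "whisker", "whisker-backend"):
            repo = f"flatcar/calico/{name}"
            print(repo, "v3.30.4 exists:", tag_exists(repo, "v3.30.4"))

A 404 from the manifests endpoint matches containerd's StatusNotFound line exactly; a 200 would instead point at something node-local (auth, proxy, or mirror configuration).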
Nov 8 00:21:37.171936 kubelet[2509]: E1108 00:21:37.171883 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0"
Nov 8 00:21:37.172852 kubelet[2509]: E1108 00:21:37.172815 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56944ff74d-jfjjh" podUID="6cb2e068-b098-433b-ba03-a3d8a7a50da8"
Nov 8 00:21:38.424211 kubelet[2509]: E1108 00:21:38.424170 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:40.171474 kubelet[2509]: E1108 00:21:40.171395 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c"
Nov 8 00:21:41.172418 kubelet[2509]: E1108 00:21:41.172279 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294"
Nov 8 00:21:41.536127 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:45904.service - OpenSSH per-connection server daemon (10.0.0.1:45904).
Nov 8 00:21:41.576671 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 45904 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:41.578587 sshd[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:41.584102 systemd-logind[1454]: New session 17 of user core.
Nov 8 00:21:41.592065 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:21:41.730848 sshd[5605]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:41.742494 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:45904.service: Deactivated successfully.
Nov 8 00:21:41.745195 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:21:41.747798 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:21:41.754247 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:45920.service - OpenSSH per-connection server daemon (10.0.0.1:45920).
Nov 8 00:21:41.755174 systemd-logind[1454]: Removed session 17.
Nov 8 00:21:41.788235 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 45920 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:41.790141 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:41.795344 systemd-logind[1454]: New session 18 of user core.
Nov 8 00:21:41.808020 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:21:42.092705 sshd[5619]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:42.104153 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:45920.service: Deactivated successfully.
Nov 8 00:21:42.106130 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:21:42.107769 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:21:42.116426 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928).
Nov 8 00:21:42.117584 systemd-logind[1454]: Removed session 18.
Nov 8 00:21:42.149177 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:42.150844 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:42.155391 systemd-logind[1454]: New session 19 of user core.
Nov 8 00:21:42.164991 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:21:42.171949 kubelet[2509]: E1108 00:21:42.171895 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:42.685681 sshd[5632]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:42.698687 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:45928.service: Deactivated successfully.
Nov 8 00:21:42.703367 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:21:42.708605 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:21:42.718615 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:45942.service - OpenSSH per-connection server daemon (10.0.0.1:45942).
Nov 8 00:21:42.721581 systemd-logind[1454]: Removed session 19.
Nov 8 00:21:42.761427 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 45942 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:42.764208 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:42.774227 systemd-logind[1454]: New session 20 of user core.
Nov 8 00:21:42.782091 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:21:43.048527 sshd[5650]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:43.064939 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:45942.service: Deactivated successfully.
Nov 8 00:21:43.067338 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:21:43.068143 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:21:43.080117 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:45948.service - OpenSSH per-connection server daemon (10.0.0.1:45948).
Nov 8 00:21:43.080746 systemd-logind[1454]: Removed session 20.
Nov 8 00:21:43.111603 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 45948 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:43.113513 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:43.118733 systemd-logind[1454]: New session 21 of user core.
Nov 8 00:21:43.129171 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:21:43.172675 kubelet[2509]: E1108 00:21:43.171991 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:43.263828 sshd[5662]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:43.268368 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:45948.service: Deactivated successfully.
Nov 8 00:21:43.270727 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:21:43.271406 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:21:43.272389 systemd-logind[1454]: Removed session 21.
Nov 8 00:21:48.282158 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:51350.service - OpenSSH per-connection server daemon (10.0.0.1:51350).
Nov 8 00:21:48.324300 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:48.326194 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:48.330746 systemd-logind[1454]: New session 22 of user core.
Nov 8 00:21:48.348009 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:21:48.483800 sshd[5679]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:48.489733 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:51350.service: Deactivated successfully.
Nov 8 00:21:48.492616 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:21:48.493501 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:21:48.494772 systemd-logind[1454]: Removed session 22.
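The timestamps show kubelet's retry pacing: the first kube-controllers pull fails at 00:21:25, the ImagePullBackOff message appears at 00:21:37, and containerd is next asked to pull at 00:21:50 (below). Failed pulls are retried on an exponential backoff rather than immediately. A toy model of that schedule follows; the 10 s initial delay and 5 min cap are commonly cited kubelet defaults, assumed here rather than read out of this log:

    # Toy model of kubelet-style exponential image-pull backoff.
    # initial/cap values are assumed defaults, not taken from the log.
    from itertools import islice

    def backoff_schedule(initial=10.0, factor=2.0, cap=300.0):
        # Yield successive retry delays, doubling up to the cap.
        delay = initial
        while True:
            yield delay
            delay = min(delay * factor, cap)

    if __name__ == "__main__":
        t = 0.0
        for attempt, delay in enumerate(islice(backoff_schedule(), 6), 1):
            print(f"retry {attempt}: wait {delay:5.0f}s (t={t:5.0f}s)")
            t += delay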
Nov 8 00:21:50.172013 kubelet[2509]: E1108 00:21:50.171965 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:50.173269 containerd[1462]: time="2025-11-08T00:21:50.173230703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:21:50.562817 containerd[1462]: time="2025-11-08T00:21:50.562662708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:50.564094 containerd[1462]: time="2025-11-08T00:21:50.564004143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:21:50.564424 containerd[1462]: time="2025-11-08T00:21:50.564115031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:21:50.564473 kubelet[2509]: E1108 00:21:50.564343 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:21:50.564473 kubelet[2509]: E1108 00:21:50.564408 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:21:50.565197 kubelet[2509]: E1108 00:21:50.564643 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-564bf5b6db-26fpn_calico-system(e3df79a4-2d69-4d1b-a3d8-5080134a94f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:50.565197 kubelet[2509]: E1108 00:21:50.564699 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-564bf5b6db-26fpn" podUID="e3df79a4-2d69-4d1b-a3d8-5080134a94f0"
Nov 8 00:21:50.565492 containerd[1462]: time="2025-11-08T00:21:50.565460362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:21:50.889548 containerd[1462]: time="2025-11-08T00:21:50.889311998Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:50.890845 containerd[1462]: time="2025-11-08T00:21:50.890747128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:21:50.890845 containerd[1462]: time="2025-11-08T00:21:50.890797593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:50.891192 kubelet[2509]: E1108 00:21:50.891127 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:50.891259 kubelet[2509]: E1108 00:21:50.891206 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:50.891356 kubelet[2509]: E1108 00:21:50.891323 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-xfnk2_calico-apiserver(779024e8-f065-402a-9618-c2d1616b455b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:50.891450 kubelet[2509]: E1108 00:21:50.891375 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-xfnk2" podUID="779024e8-f065-402a-9618-c2d1616b455b"
Nov 8 00:21:52.172295 containerd[1462]: time="2025-11-08T00:21:52.172009233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:21:52.551349 containerd[1462]: time="2025-11-08T00:21:52.551109128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:52.552853 containerd[1462]: time="2025-11-08T00:21:52.552780532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:21:52.553041 containerd[1462]: time="2025-11-08T00:21:52.552835214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:21:52.553226 kubelet[2509]: E1108 00:21:52.553173 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:21:52.553697 kubelet[2509]: E1108 00:21:52.553241 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:21:52.553697 kubelet[2509]: E1108 00:21:52.553350 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:52.554362 containerd[1462]: time="2025-11-08T00:21:52.554327492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:21:52.921088 containerd[1462]: time="2025-11-08T00:21:52.921016778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:52.922227 containerd[1462]: time="2025-11-08T00:21:52.922158038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:21:52.922227 containerd[1462]: time="2025-11-08T00:21:52.922199586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:21:52.922480 kubelet[2509]: E1108 00:21:52.922432 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:21:52.922575 kubelet[2509]: E1108 00:21:52.922488 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:21:52.922617 kubelet[2509]: E1108 00:21:52.922573 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-56944ff74d-jfjjh_calico-system(6cb2e068-b098-433b-ba03-a3d8a7a50da8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:52.922654 kubelet[2509]: E1108 00:21:52.922618 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-56944ff74d-jfjjh" podUID="6cb2e068-b098-433b-ba03-a3d8a7a50da8"
Nov 8 00:21:53.503198 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:51362.service - OpenSSH per-connection server daemon (10.0.0.1:51362).
Nov 8 00:21:53.535213 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 51362 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:53.537163 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:53.541683 systemd-logind[1454]: New session 23 of user core.
Nov 8 00:21:53.552141 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:21:53.673058 sshd[5705]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:53.677951 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:51362.service: Deactivated successfully.
Nov 8 00:21:53.680376 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:21:53.681098 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:21:53.682313 systemd-logind[1454]: Removed session 23.
Nov 8 00:21:55.172852 containerd[1462]: time="2025-11-08T00:21:55.172797850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:21:55.513230 containerd[1462]: time="2025-11-08T00:21:55.513056478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:55.514249 containerd[1462]: time="2025-11-08T00:21:55.514211334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:21:55.514330 containerd[1462]: time="2025-11-08T00:21:55.514282137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:55.514442 kubelet[2509]: E1108 00:21:55.514395 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:55.514838 kubelet[2509]: E1108 00:21:55.514443 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:21:55.514838 kubelet[2509]: E1108 00:21:55.514527 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6477c478b5-v47ws_calico-apiserver(48366bf3-7c5b-44ee-9949-cb0f73b78d3c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:55.514838 kubelet[2509]: E1108 00:21:55.514560 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6477c478b5-v47ws" podUID="48366bf3-7c5b-44ee-9949-cb0f73b78d3c"
Nov 8 00:21:56.172112 containerd[1462]: time="2025-11-08T00:21:56.172066319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:21:56.532762 containerd[1462]: time="2025-11-08T00:21:56.532619029Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:56.533901 containerd[1462]: time="2025-11-08T00:21:56.533813699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:21:56.534060 containerd[1462]: time="2025-11-08T00:21:56.533877459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:21:56.534160 kubelet[2509]: E1108 00:21:56.534113 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:56.534514 kubelet[2509]: E1108 00:21:56.534173 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:21:56.534514 kubelet[2509]: E1108 00:21:56.534264 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:56.535131 containerd[1462]: time="2025-11-08T00:21:56.535111083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:21:56.905343 containerd[1462]: time="2025-11-08T00:21:56.905135919Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:56.906504 containerd[1462]: time="2025-11-08T00:21:56.906406862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:21:56.906671 containerd[1462]: time="2025-11-08T00:21:56.906496069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:21:56.906807 kubelet[2509]: E1108 00:21:56.906738 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:56.906882 kubelet[2509]: E1108 00:21:56.906812 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:21:56.906963 kubelet[2509]: E1108 00:21:56.906933 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rmjcb_calico-system(a7e1a5e5-d1e7-4901-bce6-3563db023294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:56.907045 kubelet[2509]: E1108 00:21:56.906987 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rmjcb" podUID="a7e1a5e5-d1e7-4901-bce6-3563db023294"
Nov 8 00:21:57.174361 kubelet[2509]: E1108 00:21:57.174314 2509 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:21:57.175280 containerd[1462]: time="2025-11-08T00:21:57.175237126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:21:57.509427 containerd[1462]: time="2025-11-08T00:21:57.509278028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:21:57.510664 containerd[1462]: time="2025-11-08T00:21:57.510600559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:21:57.510762 containerd[1462]: time="2025-11-08T00:21:57.510630886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:21:57.510918 kubelet[2509]: E1108 00:21:57.510881 2509 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:57.511002 kubelet[2509]: E1108 00:21:57.510924 2509 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:21:57.511037 kubelet[2509]: E1108 00:21:57.511001 2509 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-4wpxj_calico-system(6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:21:57.511077 kubelet[2509]: E1108 00:21:57.511030 2509 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-4wpxj" podUID="6d4ff612-553f-4b12-9b88-ad8ba2ea5f5d"
Nov 8 00:21:58.683514 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:50554.service - OpenSSH per-connection server daemon (10.0.0.1:50554).
Nov 8 00:21:58.741422 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 50554 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:21:58.743291 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:21:58.748126 systemd-logind[1454]: New session 24 of user core.
Nov 8 00:21:58.760265 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:21:58.885021 sshd[5719]: pam_unix(sshd:session): session closed for user core
Nov 8 00:21:58.889987 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:50554.service: Deactivated successfully.
Nov 8 00:21:58.892434 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:21:58.893306 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:21:58.894808 systemd-logind[1454]: Removed session 24.