Jan 20 00:40:19.199153 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026 Jan 20 00:40:19.199174 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:40:19.199192 kernel: BIOS-provided physical RAM map: Jan 20 00:40:19.199203 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 20 00:40:19.199213 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 20 00:40:19.199223 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 20 00:40:19.199235 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 20 00:40:19.199245 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 20 00:40:19.199255 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 20 00:40:19.199276 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 20 00:40:19.199282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 20 00:40:19.199288 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 20 00:40:19.199294 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 20 00:40:19.199299 kernel: NX (Execute Disable) protection: active Jan 20 00:40:19.199306 kernel: APIC: Static calls initialized Jan 20 00:40:19.199315 kernel: SMBIOS 2.8 present. 
Jan 20 00:40:19.199321 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 20 00:40:19.199326 kernel: Hypervisor detected: KVM Jan 20 00:40:19.199451 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 20 00:40:19.199461 kernel: kvm-clock: using sched offset of 8425585304 cycles Jan 20 00:40:19.199471 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 20 00:40:19.199480 kernel: tsc: Detected 2445.424 MHz processor Jan 20 00:40:19.199490 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 00:40:19.199499 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 00:40:19.199514 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 20 00:40:19.199524 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 20 00:40:19.199535 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 00:40:19.199545 kernel: Using GB pages for direct mapping Jan 20 00:40:19.199554 kernel: ACPI: Early table checksum verification disabled Jan 20 00:40:19.199563 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 20 00:40:19.199572 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199581 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199591 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199604 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 20 00:40:19.199613 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199622 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199631 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199640 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 00:40:19.199649 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 20 00:40:19.199659 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 20 00:40:19.199674 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 20 00:40:19.199687 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 20 00:40:19.199697 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 20 00:40:19.199706 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 20 00:40:19.199716 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 20 00:40:19.199725 kernel: No NUMA configuration found Jan 20 00:40:19.199735 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 20 00:40:19.199748 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 20 00:40:19.199758 kernel: Zone ranges: Jan 20 00:40:19.199769 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 00:40:19.199781 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 20 00:40:19.199792 kernel: Normal empty Jan 20 00:40:19.199802 kernel: Movable zone start for each node Jan 20 00:40:19.199812 kernel: Early memory node ranges Jan 20 00:40:19.199822 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 20 00:40:19.199832 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 20 00:40:19.199842 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 20 00:40:19.199858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 00:40:19.199868 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 20 00:40:19.199879 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 20 00:40:19.199888 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 20 00:40:19.199899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 20 00:40:19.199909 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 00:40:19.199919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 20 00:40:19.199930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 20 00:40:19.199941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 00:40:19.199959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 20 00:40:19.199971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 20 00:40:19.199982 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 00:40:19.199994 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 20 00:40:19.200006 kernel: TSC deadline timer available Jan 20 00:40:19.200017 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 20 00:40:19.200029 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 20 00:40:19.200041 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 20 00:40:19.200053 kernel: kvm-guest: setup PV sched yield Jan 20 00:40:19.200069 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 20 00:40:19.200082 kernel: Booting paravirtualized kernel on KVM Jan 20 00:40:19.200094 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 00:40:19.200105 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 20 00:40:19.200117 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 20 00:40:19.200129 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 20 00:40:19.200140 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 20 00:40:19.200152 kernel: kvm-guest: PV spinlocks enabled Jan 20 00:40:19.200164 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 00:40:19.200183 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:40:19.200196 kernel: random: crng init done Jan 20 00:40:19.200207 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 00:40:19.200219 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 00:40:19.200230 kernel: Fallback order for Node 0: 0 Jan 20 00:40:19.200242 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 20 00:40:19.200254 kernel: Policy zone: DMA32 Jan 20 00:40:19.200265 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 00:40:19.200277 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 136884K reserved, 0K cma-reserved) Jan 20 00:40:19.200295 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 00:40:19.200307 kernel: ftrace: allocating 37989 entries in 149 pages Jan 20 00:40:19.200318 kernel: ftrace: allocated 149 pages with 4 groups Jan 20 00:40:19.200452 kernel: Dynamic Preempt: voluntary Jan 20 00:40:19.200469 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 00:40:19.200483 kernel: rcu: RCU event tracing is enabled. Jan 20 00:40:19.200494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 00:40:19.200508 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 00:40:19.200520 kernel: Rude variant of Tasks RCU enabled. Jan 20 00:40:19.200539 kernel: Tracing variant of Tasks RCU enabled. Jan 20 00:40:19.200550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 00:40:19.200562 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 00:40:19.200574 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 20 00:40:19.200586 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 00:40:19.200597 kernel: Console: colour VGA+ 80x25 Jan 20 00:40:19.200608 kernel: printk: console [ttyS0] enabled Jan 20 00:40:19.200621 kernel: ACPI: Core revision 20230628 Jan 20 00:40:19.200633 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 20 00:40:19.200649 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 00:40:19.200661 kernel: x2apic enabled Jan 20 00:40:19.200674 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 00:40:19.200686 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 20 00:40:19.200698 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 20 00:40:19.200708 kernel: kvm-guest: setup PV IPIs Jan 20 00:40:19.200719 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 20 00:40:19.200747 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 20 00:40:19.200757 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 20 00:40:19.200767 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 00:40:19.200778 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 20 00:40:19.200788 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 20 00:40:19.200802 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 00:40:19.200812 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 00:40:19.200822 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 00:40:19.200832 kernel: Speculative Store Bypass: Vulnerable Jan 20 00:40:19.200846 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 20 00:40:19.200857 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 20 00:40:19.200867 kernel: active return thunk: srso_alias_return_thunk Jan 20 00:40:19.200877 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 20 00:40:19.200887 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 20 00:40:19.200897 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 20 00:40:19.200908 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 00:40:19.200919 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 00:40:19.200931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 00:40:19.200945 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 00:40:19.200955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 20 00:40:19.200965 kernel: Freeing SMP alternatives memory: 32K Jan 20 00:40:19.200975 kernel: pid_max: default: 32768 minimum: 301 Jan 20 00:40:19.200986 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 00:40:19.200996 kernel: landlock: Up and running. Jan 20 00:40:19.201006 kernel: SELinux: Initializing. Jan 20 00:40:19.201016 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:40:19.201026 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 00:40:19.201040 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 20 00:40:19.201050 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:40:19.201060 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:40:19.201071 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 00:40:19.201081 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 20 00:40:19.201091 kernel: signal: max sigframe size: 1776 Jan 20 00:40:19.201101 kernel: rcu: Hierarchical SRCU implementation. Jan 20 00:40:19.201111 kernel: rcu: Max phase no-delay instances is 400. Jan 20 00:40:19.201125 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 00:40:19.201135 kernel: smp: Bringing up secondary CPUs ... Jan 20 00:40:19.201145 kernel: smpboot: x86: Booting SMP configuration: Jan 20 00:40:19.201155 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 20 00:40:19.201165 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 00:40:19.201175 kernel: smpboot: Max logical packages: 1 Jan 20 00:40:19.201188 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 20 00:40:19.201200 kernel: devtmpfs: initialized Jan 20 00:40:19.201213 kernel: x86/mm: Memory block size: 128MB Jan 20 00:40:19.201224 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 00:40:19.201239 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 00:40:19.201249 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 00:40:19.201259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 00:40:19.201270 kernel: audit: initializing netlink subsys (disabled) Jan 20 00:40:19.201280 kernel: audit: type=2000 audit(1768869617.104:1): state=initialized audit_enabled=0 res=1 Jan 20 00:40:19.201290 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 00:40:19.201300 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 00:40:19.201310 kernel: cpuidle: using governor menu Jan 20 00:40:19.201320 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 00:40:19.201419 kernel: dca service started, version 1.12.1 Jan 20 00:40:19.201431 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 20 00:40:19.201441 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 20 00:40:19.201452 kernel: PCI: Using configuration type 1 for base access Jan 20 00:40:19.201462 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 20 00:40:19.201472 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 00:40:19.201483 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 00:40:19.201493 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 00:40:19.201508 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 00:40:19.201518 kernel: ACPI: Added _OSI(Module Device) Jan 20 00:40:19.201528 kernel: ACPI: Added _OSI(Processor Device) Jan 20 00:40:19.201538 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 00:40:19.201549 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 00:40:19.201559 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 20 00:40:19.201569 kernel: ACPI: Interpreter enabled Jan 20 00:40:19.201579 kernel: ACPI: PM: (supports S0 S3 S5) Jan 20 00:40:19.201589 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 00:40:19.201600 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 00:40:19.201614 kernel: PCI: Using E820 reservations for host bridge windows Jan 20 00:40:19.201624 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 20 00:40:19.201635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 00:40:19.201860 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 00:40:19.202029 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 20 00:40:19.202186 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 20 00:40:19.202203 kernel: PCI host bridge to bus 0000:00 Jan 20 00:40:19.202539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 20 00:40:19.202692 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 20 00:40:19.202835 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 20 00:40:19.202975 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 20 00:40:19.203113 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 20 00:40:19.203274 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 20 00:40:19.203555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 00:40:19.203767 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 20 00:40:19.203957 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 20 00:40:19.204148 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 20 00:40:19.204510 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 20 00:40:19.204704 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 20 00:40:19.204896 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 20 00:40:19.205059 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 20 00:40:19.205251 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 20 00:40:19.205486 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 20 00:40:19.205670 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 20 00:40:19.205808 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 20 00:40:19.206010 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 20 00:40:19.206156 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 20 00:40:19.206327 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 20 00:40:19.206566 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 20 00:40:19.206687 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 20 00:40:19.206844 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 20 00:40:19.206968 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 20 00:40:19.207088 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 20 00:40:19.207240 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 20 00:40:19.207478 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 20 00:40:19.207610 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 20 00:40:19.207748 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 20 00:40:19.207908 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 20 00:40:19.208039 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 20 00:40:19.208157 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 20 00:40:19.208172 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 20 00:40:19.208181 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 20 00:40:19.208194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 20 00:40:19.208208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 20 00:40:19.208219 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 20 00:40:19.208232 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 20 00:40:19.208245 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 20 00:40:19.208258 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 20 00:40:19.208266 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 
20 00:40:19.208278 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 20 00:40:19.208284 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 20 00:40:19.208291 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 20 00:40:19.208297 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 20 00:40:19.208304 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 20 00:40:19.208310 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 20 00:40:19.208317 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 20 00:40:19.208323 kernel: iommu: Default domain type: Translated Jan 20 00:40:19.208410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 00:40:19.208422 kernel: PCI: Using ACPI for IRQ routing Jan 20 00:40:19.208428 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 20 00:40:19.208435 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 20 00:40:19.208441 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 20 00:40:19.208592 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 20 00:40:19.208753 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 20 00:40:19.208885 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 20 00:40:19.208894 kernel: vgaarb: loaded Jan 20 00:40:19.208901 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 20 00:40:19.208913 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 20 00:40:19.208919 kernel: clocksource: Switched to clocksource kvm-clock Jan 20 00:40:19.208926 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 00:40:19.208932 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 00:40:19.208939 kernel: pnp: PnP ACPI init Jan 20 00:40:19.209071 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 20 00:40:19.209082 kernel: pnp: PnP ACPI: found 6 devices Jan 20 00:40:19.209089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 00:40:19.209100 kernel: NET: Registered PF_INET protocol family Jan 20 00:40:19.209106 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 00:40:19.209113 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 00:40:19.209120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 00:40:19.209126 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 00:40:19.209133 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 00:40:19.209140 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 00:40:19.209146 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:40:19.209155 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 00:40:19.209162 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 00:40:19.209168 kernel: NET: Registered PF_XDP protocol family Jan 20 00:40:19.209317 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 20 00:40:19.209575 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 20 00:40:19.209687 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 20 00:40:19.209795 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 20 00:40:19.209902 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] 
Jan 20 00:40:19.210007 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 20 00:40:19.210021 kernel: PCI: CLS 0 bytes, default 64 Jan 20 00:40:19.210028 kernel: Initialise system trusted keyrings Jan 20 00:40:19.210035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 00:40:19.210042 kernel: Key type asymmetric registered Jan 20 00:40:19.210048 kernel: Asymmetric key parser 'x509' registered Jan 20 00:40:19.210054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 20 00:40:19.210061 kernel: io scheduler mq-deadline registered Jan 20 00:40:19.210068 kernel: io scheduler kyber registered Jan 20 00:40:19.210074 kernel: io scheduler bfq registered Jan 20 00:40:19.210083 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 00:40:19.210091 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 00:40:19.210098 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 00:40:19.210104 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 00:40:19.210111 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 00:40:19.210118 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 00:40:19.210124 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 00:40:19.210131 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 00:40:19.210138 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 00:40:19.210301 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 20 00:40:19.210314 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 00:40:19.210521 kernel: rtc_cmos 00:04: registered as rtc0 Jan 20 00:40:19.210637 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T00:40:18 UTC (1768869618) Jan 20 00:40:19.210750 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 20 00:40:19.210759 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 20 00:40:19.210766 kernel: NET: Registered PF_INET6 protocol family Jan 20 00:40:19.210773 kernel: Segment Routing with IPv6 Jan 20 00:40:19.210784 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 00:40:19.210791 kernel: NET: Registered PF_PACKET protocol family Jan 20 00:40:19.210797 kernel: Key type dns_resolver registered Jan 20 00:40:19.210804 kernel: IPI shorthand broadcast: enabled Jan 20 00:40:19.210811 kernel: sched_clock: Marking stable (1166049752, 391684945)->(1944061588, -386326891) Jan 20 00:40:19.210817 kernel: registered taskstats version 1 Jan 20 00:40:19.210824 kernel: Loading compiled-in X.509 certificates Jan 20 00:40:19.210831 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1' Jan 20 00:40:19.210837 kernel: Key type .fscrypt registered Jan 20 00:40:19.210846 kernel: Key type fscrypt-provisioning registered Jan 20 00:40:19.210853 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 00:40:19.210859 kernel: ima: Allocated hash algorithm: sha1 Jan 20 00:40:19.210866 kernel: ima: No architecture policies found Jan 20 00:40:19.210873 kernel: clk: Disabling unused clocks Jan 20 00:40:19.210879 kernel: Freeing unused kernel image (initmem) memory: 42880K Jan 20 00:40:19.210886 kernel: Write protecting the kernel read-only data: 36864k Jan 20 00:40:19.210892 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 20 00:40:19.210899 kernel: Run /init as init process Jan 20 00:40:19.210908 kernel: with arguments: Jan 20 00:40:19.210915 kernel: /init Jan 20 00:40:19.210921 kernel: with environment: Jan 20 00:40:19.210928 kernel: HOME=/ Jan 20 00:40:19.210934 kernel: TERM=linux Jan 20 00:40:19.210942 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:40:19.210951 systemd[1]: Detected virtualization kvm. Jan 20 00:40:19.210961 systemd[1]: Detected architecture x86-64. Jan 20 00:40:19.210968 systemd[1]: Running in initrd. Jan 20 00:40:19.210975 systemd[1]: No hostname configured, using default hostname. Jan 20 00:40:19.210981 systemd[1]: Hostname set to <localhost>. Jan 20 00:40:19.210988 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:40:19.210995 systemd[1]: Queued start job for default target initrd.target. Jan 20 00:40:19.211002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:19.211009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:40:19.211020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 00:40:19.211027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:40:19.211034 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 00:40:19.211041 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 00:40:19.211049 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 00:40:19.211056 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 00:40:19.211063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:19.211073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:40:19.211080 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:40:19.211087 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:40:19.211094 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:40:19.211114 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:40:19.211123 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:40:19.211130 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:40:19.211140 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 00:40:19.211147 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 00:40:19.211155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:19.211162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 00:40:19.211169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:40:19.211176 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:40:19.211189 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 00:40:19.211203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:40:19.211223 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 00:40:19.211237 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 00:40:19.211252 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:40:19.211261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:40:19.211269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:19.211276 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 00:40:19.211283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:19.211291 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 00:40:19.211302 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:40:19.211309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:40:19.211317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:40:19.211452 systemd-journald[193]: Collecting audit messages is disabled. Jan 20 00:40:19.211476 systemd-journald[193]: Journal started Jan 20 00:40:19.211491 systemd-journald[193]: Runtime Journal (/run/log/journal/79be16be8d794bdeae132620776299d3) is 6.0M, max 48.4M, 42.3M free. Jan 20 00:40:19.174776 systemd-modules-load[194]: Inserted module 'overlay' Jan 20 00:40:19.421476 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:40:19.421503 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 00:40:19.421515 kernel: Bridge firewalling registered Jan 20 00:40:19.228481 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 20 00:40:19.418319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:19.423770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:19.430629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:19.456783 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:40:19.458906 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:40:19.459935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:40:19.484431 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:40:19.496655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:19.505975 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:19.529719 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 20 00:40:19.532583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:40:19.544324 dracut-cmdline[227]: dracut-dracut-053 Jan 20 00:40:19.548696 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441 Jan 20 00:40:19.599673 systemd-resolved[229]: Positive Trust Anchors: Jan 20 00:40:19.599731 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:40:19.599777 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:40:19.603042 systemd-resolved[229]: Defaulting to hostname 'linux'. Jan 20 00:40:19.604599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:40:19.611874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:19.655450 kernel: SCSI subsystem initialized Jan 20 00:40:19.665480 kernel: Loading iSCSI transport class v2.0-870. Jan 20 00:40:19.679449 kernel: iscsi: registered transport (tcp) Jan 20 00:40:19.703808 kernel: iscsi: registered transport (qla4xxx) Jan 20 00:40:19.703925 kernel: QLogic iSCSI HBA Driver Jan 20 00:40:19.760879 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 00:40:19.779594 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 00:40:19.813476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 00:40:19.813548 kernel: device-mapper: uevent: version 1.0.3 Jan 20 00:40:19.816511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 00:40:19.869491 kernel: raid6: avx2x4 gen() 30562 MB/s Jan 20 00:40:19.887476 kernel: raid6: avx2x2 gen() 33426 MB/s Jan 20 00:40:19.907080 kernel: raid6: avx2x1 gen() 21361 MB/s Jan 20 00:40:19.907114 kernel: raid6: using algorithm avx2x2 gen() 33426 MB/s Jan 20 00:40:19.927582 kernel: raid6: .... xor() 27365 MB/s, rmw enabled Jan 20 00:40:19.927627 kernel: raid6: using avx2x2 recovery algorithm Jan 20 00:40:19.949487 kernel: xor: automatically using best checksumming function avx Jan 20 00:40:20.110512 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 00:40:20.128579 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:40:20.142737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:20.158703 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 20 00:40:20.163695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 20 00:40:20.183657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 00:40:20.201533 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Jan 20 00:40:20.246895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:40:20.276869 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:40:20.365139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:20.389622 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 00:40:20.410561 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 00:40:20.423245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:40:20.433573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:20.442039 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:40:20.448170 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 00:40:20.455499 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 00:40:20.461262 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 20 00:40:20.461684 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 00:40:20.467930 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:40:20.489271 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 00:40:20.489308 kernel: GPT:9289727 != 19775487 Jan 20 00:40:20.489326 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 00:40:20.489438 kernel: GPT:9289727 != 19775487 Jan 20 00:40:20.489457 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 00:40:20.489473 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:40:20.468038 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:40:20.489614 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:40:20.493141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:40:20.493407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:20.504131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:20.513203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:20.540196 kernel: libata version 3.00 loaded. Jan 20 00:40:20.545843 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:40:20.556597 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 20 00:40:20.559431 kernel: AES CTR mode by8 optimization enabled Jan 20 00:40:20.562408 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 00:40:20.562644 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 00:40:20.567809 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 20 00:40:20.568058 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 00:40:20.573417 kernel: scsi host0: ahci Jan 20 00:40:20.573670 kernel: scsi host1: ahci Jan 20 00:40:20.577471 kernel: scsi host2: ahci Jan 20 00:40:20.577846 kernel: scsi host3: ahci Jan 20 00:40:20.584056 kernel: scsi host4: ahci Jan 20 00:40:20.587442 kernel: scsi host5: ahci Jan 20 00:40:20.587798 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 20 00:40:20.587819 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 20 00:40:20.587838 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 20 00:40:20.587854 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 20 00:40:20.587872 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 20 00:40:20.587887 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 20 00:40:20.599004 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 00:40:20.791611 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Jan 20 00:40:20.791652 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (457) Jan 20 00:40:20.791940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:20.804218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 00:40:20.823561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:40:20.835533 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 00:40:20.847963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 00:40:20.870604 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 00:40:20.880560 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 00:40:20.891993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:40:20.892016 disk-uuid[552]: Primary Header is updated. Jan 20 00:40:20.892016 disk-uuid[552]: Secondary Entries is updated. Jan 20 00:40:20.892016 disk-uuid[552]: Secondary Header is updated. Jan 20 00:40:20.903938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:40:20.903961 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 00:40:20.907169 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 00:40:20.907194 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 00:40:20.912803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:40:20.912840 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 00:40:20.916033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 20 00:40:20.943836 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 00:40:20.943886 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 00:40:20.943901 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 00:40:20.943917 kernel: ata3.00: applying bridge limits Jan 20 00:40:20.943932 kernel: ata3.00: configured for UDMA/100 Jan 20 00:40:20.950640 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 00:40:21.026177 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 00:40:21.026528 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 00:40:21.041445 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 00:40:21.904504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 00:40:21.905314 disk-uuid[554]: The operation has completed successfully. Jan 20 00:40:21.951235 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 00:40:21.951527 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 00:40:21.981854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 00:40:21.994556 sh[594]: Success Jan 20 00:40:22.018463 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 20 00:40:22.071299 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 00:40:22.096593 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 00:40:22.107659 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 00:40:22.128806 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 00:40:22.128877 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:40:22.128899 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 00:40:22.137275 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 00:40:22.137319 kernel: BTRFS info (device dm-0): using free space tree Jan 20 00:40:22.150936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 00:40:22.154189 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 00:40:22.170660 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 00:40:22.178602 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 00:40:22.197269 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:40:22.197327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:40:22.197426 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:40:22.204552 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:40:22.217823 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 00:40:22.226588 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:40:22.233595 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 00:40:22.247828 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 00:40:22.315804 ignition[664]: Ignition 2.19.0 Jan 20 00:40:22.315846 ignition[664]: Stage: fetch-offline Jan 20 00:40:22.315883 ignition[664]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:22.315893 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:22.315980 ignition[664]: parsed url from cmdline: "" Jan 20 00:40:22.315984 ignition[664]: no config URL provided Jan 20 00:40:22.315989 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 00:40:22.315998 ignition[664]: no config at "/usr/lib/ignition/user.ign" Jan 20 00:40:22.316024 ignition[664]: op(1): [started] loading QEMU firmware config module Jan 20 00:40:22.316029 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 00:40:22.325806 ignition[664]: op(1): [finished] loading QEMU firmware config module Jan 20 00:40:22.425813 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:40:22.443649 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:40:22.478012 systemd-networkd[784]: lo: Link UP Jan 20 00:40:22.478062 systemd-networkd[784]: lo: Gained carrier Jan 20 00:40:22.487529 systemd-networkd[784]: Enumeration completed Jan 20 00:40:22.488069 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:40:22.492838 systemd[1]: Reached target network.target - Network. Jan 20 00:40:22.511068 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:40:22.511108 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:40:22.529468 systemd-networkd[784]: eth0: Link UP Jan 20 00:40:22.529509 systemd-networkd[784]: eth0: Gained carrier Jan 20 00:40:22.529524 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:40:22.577496 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:40:22.610551 ignition[664]: parsing config with SHA512: 25ff84d267a2d9cc2196c7621180697e25bef92b972bef3d0de8fc9ba0d112e213269f8cb344c5ce8d8c2b2496ea6ff0e5bf97f354b4db68971758703c38348d Jan 20 00:40:22.617550 unknown[664]: fetched base config from "system" Jan 20 00:40:22.618109 unknown[664]: fetched user config from "qemu" Jan 20 00:40:22.618933 ignition[664]: fetch-offline: fetch-offline passed Jan 20 00:40:22.623000 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:40:22.619046 ignition[664]: Ignition finished successfully Jan 20 00:40:22.628771 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 00:40:22.641743 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 00:40:22.675525 ignition[789]: Ignition 2.19.0 Jan 20 00:40:22.675561 ignition[789]: Stage: kargs Jan 20 00:40:22.675743 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:22.675754 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:22.689275 ignition[789]: kargs: kargs passed Jan 20 00:40:22.689493 ignition[789]: Ignition finished successfully Jan 20 00:40:22.698235 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 00:40:22.718951 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 00:40:22.738189 ignition[797]: Ignition 2.19.0 Jan 20 00:40:22.738237 ignition[797]: Stage: disks Jan 20 00:40:22.738584 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:22.738603 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:22.755175 ignition[797]: disks: disks passed Jan 20 00:40:22.755320 ignition[797]: Ignition finished successfully Jan 20 00:40:22.764863 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 00:40:22.776663 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 00:40:22.782881 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 00:40:22.788648 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:40:22.799053 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:40:22.803871 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:40:22.823782 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 00:40:22.844575 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 20 00:40:22.850852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 00:40:22.863728 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 00:40:23.004553 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 00:40:23.007013 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 00:40:23.011425 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 00:40:23.028772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:40:23.033051 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 00:40:23.078070 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 20 00:40:23.078097 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:40:23.078108 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:40:23.078118 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:40:23.078127 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:40:23.042965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 00:40:23.043024 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 00:40:23.043162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:40:23.073050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 00:40:23.078207 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 00:40:23.111889 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 00:40:23.175317 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 00:40:23.183843 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 20 00:40:23.191084 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 00:40:23.199049 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 00:40:24.413492 systemd-networkd[784]: eth0: Gained IPv6LL Jan 20 00:40:25.733515 kernel: hrtimer: interrupt took 3726087 ns Jan 20 00:40:28.096604 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 00:40:28.118552 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 00:40:28.127344 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 00:40:28.138759 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:40:28.130986 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 00:40:28.182824 ignition[927]: INFO : Ignition 2.19.0 Jan 20 00:40:28.182824 ignition[927]: INFO : Stage: mount Jan 20 00:40:28.189018 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:28.189018 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:28.189018 ignition[927]: INFO : mount: mount passed Jan 20 00:40:28.189018 ignition[927]: INFO : Ignition finished successfully Jan 20 00:40:28.186105 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 00:40:28.215730 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 00:40:28.227070 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 00:40:28.239284 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 00:40:28.275939 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Jan 20 00:40:28.275972 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 00:40:28.275988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 00:40:28.276013 kernel: BTRFS info (device vda6): using free space tree Jan 20 00:40:28.285513 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 00:40:28.287432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 00:40:28.317054 ignition[959]: INFO : Ignition 2.19.0 Jan 20 00:40:28.317054 ignition[959]: INFO : Stage: files Jan 20 00:40:28.324621 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:28.324621 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:28.324621 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 20 00:40:28.324621 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 00:40:28.324621 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 00:40:28.324621 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 00:40:28.324621 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 00:40:28.373769 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 00:40:28.373769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:40:28.373769 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 00:40:28.324880 unknown[959]: wrote ssh authorized keys file for user: core Jan 20 00:40:28.404144 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 00:40:28.585272 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 00:40:28.585272 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 00:40:28.599576 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 00:40:28.894492 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 00:40:31.209091 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 00:40:31.209091 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 00:40:31.228631 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 00:40:31.721028 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:40:31.733164 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 00:40:31.740014 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 20 00:40:31.740014 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 00:40:31.740014 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 00:40:31.740014 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:40:31.740014 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 00:40:31.740014 ignition[959]: INFO : files: files passed Jan 20 00:40:31.740014 ignition[959]: INFO : Ignition finished successfully Jan 20 00:40:31.795905 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 00:40:31.813674 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 00:40:31.824652 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 00:40:31.826849 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 20 00:40:31.827031 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 00:40:31.877460 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 00:40:31.887999 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:31.887999 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:31.902559 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 00:40:31.910969 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:40:31.913083 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 00:40:31.930755 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 00:40:31.995043 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 00:40:31.995306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 00:40:32.003204 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 00:40:32.006122 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 00:40:32.015443 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 00:40:32.016498 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 00:40:32.041576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:40:32.073700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 00:40:32.089939 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:32.099076 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:32.108932 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 00:40:32.115882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 00:40:32.119050 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 00:40:32.128119 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 00:40:32.135835 systemd[1]: Stopped target basic.target - Basic System. Jan 20 00:40:32.142761 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 00:40:32.158291 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 00:40:32.166184 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 00:40:32.173636 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 00:40:32.180935 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 00:40:32.189322 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 00:40:32.196022 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 00:40:32.202650 systemd[1]: Stopped target swap.target - Swaps. Jan 20 00:40:32.208244 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 00:40:32.211566 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 00:40:32.219154 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 20 00:40:32.226347 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:32.234165 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 00:40:32.237268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:32.248032 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 00:40:32.251485 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 00:40:32.258642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 00:40:32.262100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 00:40:32.270289 systemd[1]: Stopped target paths.target - Path Units. Jan 20 00:40:32.276770 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 00:40:32.280525 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:40:32.289613 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 00:40:32.295785 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 00:40:32.301904 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 00:40:32.304820 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 00:40:32.311225 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 00:40:32.314149 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 00:40:32.320868 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 00:40:32.324752 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 00:40:32.333116 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 00:40:32.336553 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 00:40:32.357712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 00:40:32.367903 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 00:40:32.372250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 00:40:32.381254 ignition[1015]: INFO : Ignition 2.19.0 Jan 20 00:40:32.381254 ignition[1015]: INFO : Stage: umount Jan 20 00:40:32.381254 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 00:40:32.381254 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 00:40:32.381254 ignition[1015]: INFO : umount: umount passed Jan 20 00:40:32.381254 ignition[1015]: INFO : Ignition finished successfully Jan 20 00:40:32.381455 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:32.403559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 00:40:32.403757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 00:40:32.425034 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 00:40:32.429148 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 00:40:32.432598 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 00:40:32.442169 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 00:40:32.446206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 00:40:32.454920 systemd[1]: Stopped target network.target - Network. Jan 20 00:40:32.458085 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 20 00:40:32.461006 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 00:40:32.470750 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 00:40:32.470846 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 00:40:32.482958 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 00:40:32.483074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 00:40:32.495954 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 00:40:32.496058 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 00:40:32.507153 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 00:40:32.514933 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 00:40:32.524064 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 00:40:32.524252 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 00:40:32.524585 systemd-networkd[784]: eth0: DHCPv6 lease lost Jan 20 00:40:32.538461 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 00:40:32.538628 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 00:40:32.551229 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 00:40:32.555086 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 00:40:32.565732 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 00:40:32.565826 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:32.570354 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 00:40:32.580174 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 00:40:32.598641 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 00:40:32.600243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 00:40:32.600331 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 00:40:32.607328 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 00:40:32.607496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:32.617893 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 00:40:32.618033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:32.622463 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 00:40:32.622519 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:32.635429 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:32.656062 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 00:40:32.656271 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 00:40:32.678807 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 00:40:32.679202 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:40:32.685523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 00:40:32.685585 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 00:40:32.692521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 00:40:32.692568 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 20 00:40:32.699917 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 00:40:32.699971 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 00:40:32.703927 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 00:40:32.703978 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 00:40:32.712565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 00:40:32.712620 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 00:40:32.730580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 00:40:32.738255 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 00:40:32.738496 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:32.749064 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 00:40:32.749128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 00:40:32.755233 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 00:40:32.755295 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:32.766552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 00:40:32.766607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:32.772487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 00:40:32.772631 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 00:40:32.782516 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 00:40:32.807620 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 00:40:32.864318 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 20 00:40:32.817323 systemd[1]: Switching root. Jan 20 00:40:32.867540 systemd-journald[193]: Journal stopped Jan 20 00:40:34.718066 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 00:40:34.718157 kernel: SELinux: policy capability open_perms=1 Jan 20 00:40:34.718171 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 00:40:34.718182 kernel: SELinux: policy capability always_check_network=0 Jan 20 00:40:34.718239 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 00:40:34.718251 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 00:40:34.718265 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 00:40:34.718275 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 00:40:34.718287 kernel: audit: type=1403 audit(1768869633.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 00:40:34.718308 systemd[1]: Successfully loaded SELinux policy in 62.047ms. Jan 20 00:40:34.718333 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.958ms. Jan 20 00:40:34.718345 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 00:40:34.718356 systemd[1]: Detected virtualization kvm. Jan 20 00:40:34.718448 systemd[1]: Detected architecture x86-64. 
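After the switch to the real root, the journal records the SELinux policy load and the feature set of the systemd 255 build. A small sketch, assuming the usual runtime interfaces are present, of how the same facts can be read back after boot (/sys/fs/selinux/enforce for the enforcing state, systemctl --version for the compile-time feature flags):

    #!/usr/bin/env python3
    # Sketch: read back the SELinux state and systemd feature flags logged above.
    import pathlib, subprocess

    enforce = pathlib.Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        state = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
        print(f"SELinux policy loaded, currently {state}")
    else:
        print("SELinux not enabled on this boot")

    # First line is the version, second line the +/- feature list seen in the log.
    subprocess.run(["systemctl", "--version"], check=True)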
Jan 20 00:40:34.718469 systemd[1]: Detected first boot. Jan 20 00:40:34.718484 systemd[1]: Initializing machine ID from VM UUID. Jan 20 00:40:34.718501 zram_generator::config[1059]: No configuration found. Jan 20 00:40:34.718520 systemd[1]: Populated /etc with preset unit settings. Jan 20 00:40:34.718531 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 00:40:34.718541 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 00:40:34.718552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 00:40:34.718567 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 00:40:34.718577 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 00:40:34.718596 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 00:40:34.718615 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 00:40:34.718661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 00:40:34.718681 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 00:40:34.718697 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 00:40:34.718708 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 00:40:34.718718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 00:40:34.718730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 00:40:34.718741 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 00:40:34.718757 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 00:40:34.718777 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 00:40:34.718796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 00:40:34.718807 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 00:40:34.718817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 00:40:34.718828 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 00:40:34.718838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 00:40:34.718849 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 00:40:34.718861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 00:40:34.718885 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 00:40:34.718897 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 00:40:34.718908 systemd[1]: Reached target slices.target - Slice Units. Jan 20 00:40:34.718918 systemd[1]: Reached target swap.target - Swaps. Jan 20 00:40:34.718929 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 00:40:34.718943 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 00:40:34.718953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 00:40:34.718965 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
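On first boot systemd initializes the machine ID from the VM UUID and later commits it to /etc/machine-id (the machine-id-commit unit appears further down); the same ID names the journal directories used in this log. A small sketch that reads the committed ID and locates the matching persistent journal directory:

    #!/usr/bin/env python3
    # Sketch: show the machine ID initialized on first boot and its journal directory.
    import pathlib

    machine_id = pathlib.Path("/etc/machine-id").read_text().strip()
    print("machine-id:", machine_id)

    journal_dir = pathlib.Path("/var/log/journal") / machine_id
    print("persistent journal directory:", journal_dir,
          "exists" if journal_dir.exists() else "missing")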
Jan 20 00:40:34.718990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 00:40:34.719010 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 00:40:34.719021 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 00:40:34.719031 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 00:40:34.719042 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 00:40:34.719053 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:34.719069 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 00:40:34.719086 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 00:40:34.719105 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 00:40:34.719122 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 00:40:34.719133 systemd[1]: Reached target machines.target - Containers. Jan 20 00:40:34.719144 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 00:40:34.719154 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:34.719165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 00:40:34.719175 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 00:40:34.719192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:34.719210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:40:34.719228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:34.719240 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 00:40:34.719251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:40:34.719262 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 00:40:34.719273 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 00:40:34.719283 kernel: fuse: init (API version 7.39) Jan 20 00:40:34.719293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 00:40:34.719311 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 00:40:34.719329 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 00:40:34.719340 kernel: ACPI: bus type drm_connector registered Jan 20 00:40:34.719350 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 00:40:34.719424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 00:40:34.719444 kernel: loop: module loaded Jan 20 00:40:34.719463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 00:40:34.719483 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 00:40:34.719494 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 00:40:34.719505 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 00:40:34.719520 systemd[1]: Stopped verity-setup.service. 
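systemd then mounts the API filesystems (hugepages, mqueue, debugfs, tracefs, /tmp) and pulls in the kernel modules it needs through the modprobe@ template units; the kernel lines confirm fuse, loop and the drm connector bus coming up. A small sketch that checks a few of those mounts and modules after boot:

    #!/usr/bin/env python3
    # Sketch: verify a few of the API mounts and modules brought up above.
    import pathlib, subprocess

    for mountpoint in ("/dev/hugepages", "/dev/mqueue", "/sys/kernel/debug", "/tmp"):
        subprocess.run(["findmnt", "--noheadings", mountpoint], check=False)

    loaded = pathlib.Path("/proc/modules").read_text()
    for module in ("fuse", "loop", "dm_mod"):
        print(module, "loaded" if module + " " in loaded else "not listed (may be built in)")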
Jan 20 00:40:34.719532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:34.719551 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 00:40:34.719582 systemd-journald[1143]: Collecting audit messages is disabled. Jan 20 00:40:34.719604 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 00:40:34.719617 systemd-journald[1143]: Journal started Jan 20 00:40:34.719651 systemd-journald[1143]: Runtime Journal (/run/log/journal/79be16be8d794bdeae132620776299d3) is 6.0M, max 48.4M, 42.3M free. Jan 20 00:40:33.776665 systemd[1]: Queued start job for default target multi-user.target. Jan 20 00:40:33.796874 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 00:40:33.797544 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 00:40:33.797938 systemd[1]: systemd-journald.service: Consumed 2.029s CPU time. Jan 20 00:40:34.729008 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 00:40:34.731970 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 00:40:34.735695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 00:40:34.739316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 00:40:34.744179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 00:40:34.753708 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 00:40:34.759274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 00:40:34.764083 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 00:40:34.764307 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 00:40:34.768725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:34.768942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:34.773763 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:40:34.774110 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:40:34.778639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:34.778904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:34.783318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 00:40:34.783635 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 00:40:34.787778 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:34.787992 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:40:34.792704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 00:40:34.798122 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 00:40:34.802704 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 00:40:34.821447 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 00:40:34.836522 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 00:40:34.842495 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 20 00:40:34.849297 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 00:40:34.849329 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 00:40:34.855468 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 00:40:34.860943 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 00:40:34.866495 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 00:40:34.870250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:34.872095 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 00:40:34.878525 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 00:40:34.882659 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:40:34.889516 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 00:40:34.893940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:40:34.902163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 00:40:35.001569 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 00:40:35.022725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 00:40:35.036762 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 00:40:35.070128 kernel: loop0: detected capacity change from 0 to 140768 Jan 20 00:40:35.070197 systemd-journald[1143]: Time spent on flushing to /var/log/journal/79be16be8d794bdeae132620776299d3 is 24.393ms for 947 entries. Jan 20 00:40:35.070197 systemd-journald[1143]: System Journal (/var/log/journal/79be16be8d794bdeae132620776299d3) is 8.0M, max 195.6M, 187.6M free. Jan 20 00:40:35.156216 systemd-journald[1143]: Received client request to flush runtime journal. Jan 20 00:40:35.156280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 00:40:35.042637 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 00:40:35.052157 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 00:40:35.072246 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 00:40:35.082728 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 00:40:35.091914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 00:40:35.104606 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 00:40:35.119559 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 00:40:35.124013 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 20 00:40:35.124026 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 20 00:40:35.138827 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 00:40:35.144354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
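The runtime journal in /run is flushed to the persistent journal under /var/log/journal once the root filesystem is writable; the runtime and system journal sizes and the per-flush timing appear in the messages around this point. A short sketch, using standard journalctl options, of how to check the resulting on-disk usage and the flushed journal files:

    #!/usr/bin/env python3
    # Sketch: inspect the persistent journal that systemd-journal-flush populated.
    import subprocess

    # Total disk space used by archived and active journal files.
    subprocess.run(["journalctl", "--disk-usage"], check=True)

    # Header of the journal files, including the machine-id directory under
    # /var/log/journal that the flush message above refers to.
    subprocess.run(["journalctl", "--header"], check=True)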
Jan 20 00:40:35.164710 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 00:40:35.170552 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 00:40:35.180718 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 20 00:40:35.184657 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 00:40:35.185440 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 00:40:35.189080 kernel: loop1: detected capacity change from 0 to 142488 Jan 20 00:40:35.320825 kernel: loop2: detected capacity change from 0 to 219144 Jan 20 00:40:35.356116 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 00:40:35.374208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 00:40:35.383442 kernel: loop3: detected capacity change from 0 to 140768 Jan 20 00:40:35.424712 kernel: loop4: detected capacity change from 0 to 142488 Jan 20 00:40:35.430792 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 20 00:40:35.430837 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jan 20 00:40:35.440856 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 00:40:35.745090 kernel: loop5: detected capacity change from 0 to 219144 Jan 20 00:40:35.773241 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 20 00:40:35.774477 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 20 00:40:35.785680 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 00:40:35.785842 systemd[1]: Reloading... Jan 20 00:40:36.037511 zram_generator::config[1226]: No configuration found. Jan 20 00:40:36.324999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:40:36.374873 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 00:40:36.401872 systemd[1]: Reloading finished in 615 ms. Jan 20 00:40:36.445284 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 00:40:36.452194 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 00:40:36.566518 systemd[1]: Starting ensure-sysext.service... Jan 20 00:40:36.572534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 00:40:36.582150 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Jan 20 00:40:36.582338 systemd[1]: Reloading... Jan 20 00:40:36.836522 zram_generator::config[1288]: No configuration found. Jan 20 00:40:36.892050 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 00:40:36.892925 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 00:40:36.894661 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 00:40:36.895206 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. 
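systemd-sysext merges the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, after which systemd reloads its unit files (the "Reloading..." messages below). A minimal sketch of how to list the merged extensions and the /etc/extensions symlink that Ignition wrote earlier; the paths are the ones in the log:

    #!/usr/bin/env python3
    # Sketch: show which system extensions are currently merged into /usr.
    import pathlib, subprocess

    # Lists the hierarchies and the extensions merged into each one
    # (containerd-flatcar, docker-flatcar, kubernetes in this boot).
    subprocess.run(["systemd-sysext", "status"], check=True)

    # The kubernetes image is activated through the symlink Ignition created.
    link = pathlib.Path("/etc/extensions/kubernetes.raw")
    if link.is_symlink():
        print(link, "->", link.readlink())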
Jan 20 00:40:36.895355 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Jan 20 00:40:36.902982 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:40:36.902996 systemd-tmpfiles[1264]: Skipping /boot Jan 20 00:40:36.921539 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 00:40:36.921573 systemd-tmpfiles[1264]: Skipping /boot Jan 20 00:40:37.157522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:40:37.206550 systemd[1]: Reloading finished in 623 ms. Jan 20 00:40:37.228567 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 00:40:37.280124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 00:40:37.304830 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:40:37.314557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 00:40:37.323163 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 00:40:37.330751 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 00:40:37.340052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 00:40:37.364751 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 00:40:37.383743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:37.384058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:37.388338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:37.396194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:37.411271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:40:37.419637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:37.420555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:37.426642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:37.426985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:37.447036 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 00:40:37.454573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:37.454820 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:37.462552 augenrules[1355]: No rules Jan 20 00:40:37.470847 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:37.471134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:40:37.477062 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:40:37.482253 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 00:40:37.488557 systemd-udevd[1340]: Using default interface naming scheme 'v255'. 
Jan 20 00:40:37.494177 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 00:40:37.506125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:37.506541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 00:40:37.513144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 00:40:37.524197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 00:40:37.536723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 00:40:37.543767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 00:40:37.551790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 00:40:37.558728 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 00:40:37.564583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 00:40:37.566211 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 00:40:37.571716 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 00:40:37.579161 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 00:40:37.586309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 00:40:37.587096 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 00:40:37.593174 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 00:40:37.593529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 00:40:37.598071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 00:40:37.598302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 00:40:37.605935 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 00:40:37.606223 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 00:40:37.612011 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 00:40:37.627212 systemd[1]: Finished ensure-sysext.service. Jan 20 00:40:37.661739 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 00:40:37.664698 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 00:40:37.668343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 00:40:37.686715 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 00:40:37.693891 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 00:40:37.738191 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 20 00:40:37.755429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1377) Jan 20 00:40:38.071019 systemd-resolved[1339]: Positive Trust Anchors: Jan 20 00:40:38.071039 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 00:40:38.071085 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 00:40:38.082310 systemd-networkd[1401]: lo: Link UP Jan 20 00:40:38.083181 systemd-networkd[1401]: lo: Gained carrier Jan 20 00:40:38.084755 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 00:40:38.085952 systemd-networkd[1401]: Enumeration completed Jan 20 00:40:38.087248 systemd-resolved[1339]: Defaulting to hostname 'linux'. Jan 20 00:40:38.088447 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:40:38.088566 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 00:40:38.090904 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 00:40:38.090907 systemd-networkd[1401]: eth0: Link UP Jan 20 00:40:38.091165 systemd-networkd[1401]: eth0: Gained carrier Jan 20 00:40:38.091240 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:40:38.096809 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 00:40:38.104845 systemd[1]: Reached target network.target - Network. Jan 20 00:40:38.106661 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 00:40:38.110039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 00:40:38.114995 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 00:40:38.120642 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 00:40:38.121982 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Jan 20 00:40:38.123586 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 00:40:38.123687 systemd-timesyncd[1404]: Initial clock synchronization to Tue 2026-01-20 00:40:38.508702 UTC. Jan 20 00:40:38.127815 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
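eth0 is configured from the generic /usr/lib/systemd/network/zz-default.network, obtains 10.0.0.64/16 over DHCPv4, and the clock synchronizes against 10.0.0.1. A small sketch, using the standard networkctl/resolvectl/timedatectl front-ends, of how to inspect the same lease, resolver and NTP state on the running system; the interface name is taken from the log:

    #!/usr/bin/env python3
    # Sketch: inspect the state reported by systemd-networkd/-resolved/-timesyncd above.
    import subprocess

    IFACE = "eth0"  # interface name as logged

    subprocess.run(["networkctl", "status", IFACE], check=True)     # DHCPv4 lease, gateway
    subprocess.run(["resolvectl", "status", IFACE], check=True)     # per-link DNS configuration
    subprocess.run(["timedatectl", "timesync-status"], check=True)  # NTP server (10.0.0.1 here)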
Jan 20 00:40:38.141480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 20 00:40:38.163496 kernel: ACPI: button: Power Button [PWRF] Jan 20 00:40:38.180974 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 00:40:38.181488 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 00:40:38.181809 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 00:40:38.235739 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 20 00:40:38.269083 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 00:40:38.307823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 00:40:38.504881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 00:40:38.532539 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 00:40:38.545265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 00:40:38.835286 kernel: kvm_amd: TSC scaling supported Jan 20 00:40:38.835460 kernel: kvm_amd: Nested Virtualization enabled Jan 20 00:40:38.835488 kernel: kvm_amd: Nested Paging enabled Jan 20 00:40:38.839499 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 00:40:38.839545 kernel: kvm_amd: PMU virtualization is disabled Jan 20 00:40:38.917482 kernel: EDAC MC: Ver: 3.0.0 Jan 20 00:40:38.961105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 00:40:39.112951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 00:40:39.126951 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 00:40:39.150193 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:40:39.192823 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 00:40:39.199114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 00:40:39.203868 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 00:40:39.208740 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 00:40:39.214337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 00:40:39.219849 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 00:40:39.224864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 00:40:39.229898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 00:40:39.236162 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 00:40:39.236192 systemd[1]: Reached target paths.target - Path Units. Jan 20 00:40:39.240383 systemd[1]: Reached target timers.target - Timer Units. Jan 20 00:40:39.244675 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 00:40:39.251050 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 00:40:39.263296 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 00:40:39.269230 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
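With the OEM device checked and early LVM activation done, the system reaches sysinit.target and brings up its recurring timers (logrotate, mdadm redundancy check, tmpfiles cleanup) and path units such as motdgen.path. A short sketch of how to list the timer and path units the log shows being started:

    #!/usr/bin/env python3
    # Sketch: list the timer and path units activated around sysinit.target above.
    import subprocess

    # Timers such as logrotate.timer, mdadm.timer and systemd-tmpfiles-clean.timer.
    subprocess.run(["systemctl", "list-timers", "--all"], check=True)

    # Path units such as motdgen.path and the flatcar-install user_data watch.
    subprocess.run(["systemctl", "list-units", "--type=path"], check=True)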
Jan 20 00:40:39.276109 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 00:40:39.278573 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 00:40:39.286221 systemd[1]: Reached target basic.target - Basic System. Jan 20 00:40:39.290913 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:40:39.290940 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 00:40:39.292927 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 00:40:39.297233 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 00:40:39.299056 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 00:40:39.306642 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 00:40:39.315638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 00:40:39.324793 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 00:40:39.329962 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 00:40:39.334698 jq[1436]: false Jan 20 00:40:39.345848 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 00:40:39.350760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 00:40:39.358712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 00:40:39.370501 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 00:40:39.375680 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 00:40:39.376537 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 00:40:39.386631 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 00:40:39.391999 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 00:40:39.393358 extend-filesystems[1437]: Found loop3 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found loop4 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found loop5 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found sr0 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda1 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda2 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda3 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found usr Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda4 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda6 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda7 Jan 20 00:40:39.399673 extend-filesystems[1437]: Found vda9 Jan 20 00:40:39.399673 extend-filesystems[1437]: Checking size of /dev/vda9 Jan 20 00:40:39.504978 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 20 00:40:39.396319 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 20 00:40:39.505209 extend-filesystems[1437]: Resized partition /dev/vda9 Jan 20 00:40:39.466279 dbus-daemon[1435]: [system] SELinux support is enabled Jan 20 00:40:39.396631 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 00:40:39.531765 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024) Jan 20 00:40:39.545713 jq[1445]: true Jan 20 00:40:39.401758 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 00:40:39.546158 tar[1449]: linux-amd64/LICENSE Jan 20 00:40:39.546158 tar[1449]: linux-amd64/helm Jan 20 00:40:39.402125 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 00:40:39.546772 update_engine[1444]: I20260120 00:40:39.485539 1444 main.cc:92] Flatcar Update Engine starting Jan 20 00:40:39.546772 update_engine[1444]: I20260120 00:40:39.508102 1444 update_check_scheduler.cc:74] Next update check in 10m25s Jan 20 00:40:39.416983 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 00:40:39.465190 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 00:40:39.471233 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 00:40:39.637042 jq[1457]: true Jan 20 00:40:39.479025 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 00:40:39.479053 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 00:40:39.482191 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 00:40:39.482210 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 00:40:39.510815 systemd[1]: Started update-engine.service - Update Engine. Jan 20 00:40:39.519912 systemd-networkd[1401]: eth0: Gained IPv6LL Jan 20 00:40:39.679788 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 20 00:40:39.679957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1386) Jan 20 00:40:39.636952 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 00:40:39.637177 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 00:40:39.645694 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 20 00:40:39.645717 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 00:40:39.648385 systemd-logind[1442]: New seat seat0. Jan 20 00:40:39.651817 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 00:40:39.656925 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 00:40:39.684181 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 00:40:39.684181 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 00:40:39.684181 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 20 00:40:39.683938 systemd[1]: extend-filesystems.service: Deactivated successfully. 
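extend-filesystems grows the root ext4 filesystem online: the kernel reports the resize from 553472 to 1864699 blocks and resize2fs 1.47.1 performs it while /dev/vda9 remains mounted on /. A minimal sketch of the same operation for a generic mounted ext4 root; the device path is the one in the log and this is only an illustration of what the Flatcar unit automates:

    #!/usr/bin/env python3
    # Sketch: grow a mounted ext4 filesystem to fill its (already enlarged) partition,
    # mirroring what extend-filesystems.service does for /dev/vda9 in the log.
    import subprocess

    DEVICE = "/dev/vda9"  # root filesystem device from the log

    # Confirm the device really carries a mounted ext4 filesystem before resizing.
    fstype = subprocess.run(["findmnt", "-n", "-o", "FSTYPE", "--source", DEVICE],
                            capture_output=True, text=True).stdout.strip()
    if fstype == "ext4":
        # With no size argument resize2fs grows the filesystem to the size of the
        # underlying block device; ext4 supports doing this while mounted.
        subprocess.run(["resize2fs", DEVICE], check=True)
    else:
        print(f"{DEVICE} is {fstype or 'not mounted'}, skipping resize")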
Jan 20 00:40:39.717607 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jan 20 00:40:39.688710 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 00:40:39.725940 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 00:40:39.861729 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 00:40:39.867667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:40:39.878733 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 00:40:39.894073 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jan 20 00:40:39.884947 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 00:40:39.899770 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 00:40:39.916738 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 00:40:39.974044 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 00:40:40.122472 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 00:40:40.122964 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 00:40:40.131194 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 00:40:40.175335 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 00:40:40.279828 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 00:40:40.330134 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 00:40:40.351797 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 00:40:40.366815 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 00:40:40.367587 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 00:40:40.386350 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 00:40:40.677988 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 00:40:40.698012 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 00:40:40.704821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 00:40:40.710879 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 00:40:41.434995 containerd[1464]: time="2026-01-20T00:40:41.434521255Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 00:40:41.488216 containerd[1464]: time="2026-01-20T00:40:41.487889027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.665355 containerd[1464]: time="2026-01-20T00:40:41.665128346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:40:41.665355 containerd[1464]: time="2026-01-20T00:40:41.665304614Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 00:40:41.665355 containerd[1464]: time="2026-01-20T00:40:41.665513971Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 20 00:40:41.666187 containerd[1464]: time="2026-01-20T00:40:41.665972427Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 00:40:41.666187 containerd[1464]: time="2026-01-20T00:40:41.666017648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.666255 containerd[1464]: time="2026-01-20T00:40:41.666215098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:40:41.666255 containerd[1464]: time="2026-01-20T00:40:41.666230889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.666866767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.666885799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.666899065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.666908850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.667151605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.667857137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.668161597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.668179008Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.668349386Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 00:40:41.669164 containerd[1464]: time="2026-01-20T00:40:41.668737754Z" level=info msg="metadata content store policy set" policy=shared Jan 20 00:40:41.679191 containerd[1464]: time="2026-01-20T00:40:41.679108300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 00:40:41.679480 containerd[1464]: time="2026-01-20T00:40:41.679313573Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 00:40:41.679513 containerd[1464]: time="2026-01-20T00:40:41.679488118Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 20 00:40:41.680150 containerd[1464]: time="2026-01-20T00:40:41.679516000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 00:40:41.680150 containerd[1464]: time="2026-01-20T00:40:41.679566487Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 00:40:41.680150 containerd[1464]: time="2026-01-20T00:40:41.679812742Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 00:40:41.681940 containerd[1464]: time="2026-01-20T00:40:41.681363912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 00:40:41.682116 containerd[1464]: time="2026-01-20T00:40:41.682039214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 00:40:41.682209 containerd[1464]: time="2026-01-20T00:40:41.682185637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 00:40:41.682285 containerd[1464]: time="2026-01-20T00:40:41.682263342Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 00:40:41.682487 containerd[1464]: time="2026-01-20T00:40:41.682461458Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.682574 containerd[1464]: time="2026-01-20T00:40:41.682557020Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.682627 containerd[1464]: time="2026-01-20T00:40:41.682614499Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.682672 containerd[1464]: time="2026-01-20T00:40:41.682660871Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.682786 containerd[1464]: time="2026-01-20T00:40:41.682717062Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.682957 containerd[1464]: time="2026-01-20T00:40:41.682856202Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683033895Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683061777Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683313922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683345129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683367931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683389352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683563293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683591061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683605086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683647647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683660902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683676329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683687704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684482 containerd[1464]: time="2026-01-20T00:40:41.683698569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.683710911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.683748735Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.683824217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.683849679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.683907541Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684103163Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684135689Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684216832Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684272877Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684283609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684321941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684358404Z" level=info msg="NRI interface is disabled by configuration." Jan 20 00:40:41.684847 containerd[1464]: time="2026-01-20T00:40:41.684370766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 20 00:40:41.686622 containerd[1464]: time="2026-01-20T00:40:41.686381330Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 00:40:41.687539 containerd[1464]: time="2026-01-20T00:40:41.687514726Z" level=info msg="Connect containerd service" Jan 20 00:40:41.687722 containerd[1464]: time="2026-01-20T00:40:41.687703368Z" level=info msg="using legacy CRI server" Jan 20 00:40:41.687817 containerd[1464]: time="2026-01-20T00:40:41.687803646Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 00:40:41.689455 containerd[1464]: time="2026-01-20T00:40:41.688693914Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 00:40:41.691257 
containerd[1464]: time="2026-01-20T00:40:41.691232609Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 00:40:41.691950 containerd[1464]: time="2026-01-20T00:40:41.691766215Z" level=info msg="Start subscribing containerd event" Jan 20 00:40:41.693772 containerd[1464]: time="2026-01-20T00:40:41.693747130Z" level=info msg="Start recovering state" Jan 20 00:40:41.694231 containerd[1464]: time="2026-01-20T00:40:41.694205930Z" level=info msg="Start event monitor" Jan 20 00:40:41.694467 containerd[1464]: time="2026-01-20T00:40:41.694387653Z" level=info msg="Start snapshots syncer" Jan 20 00:40:41.694634 containerd[1464]: time="2026-01-20T00:40:41.694618325Z" level=info msg="Start cni network conf syncer for default" Jan 20 00:40:41.694728 containerd[1464]: time="2026-01-20T00:40:41.694707769Z" level=info msg="Start streaming server" Jan 20 00:40:41.695389 containerd[1464]: time="2026-01-20T00:40:41.693291239Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 00:40:41.695622 containerd[1464]: time="2026-01-20T00:40:41.695605787Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 00:40:41.696122 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 00:40:41.702254 containerd[1464]: time="2026-01-20T00:40:41.700615757Z" level=info msg="containerd successfully booted in 0.275871s" Jan 20 00:40:41.716886 tar[1449]: linux-amd64/README.md Jan 20 00:40:41.750980 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 00:40:44.364061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:40:44.368959 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 00:40:44.373688 systemd[1]: Startup finished in 1.337s (kernel) + 14.257s (initrd) + 11.400s (userspace) = 26.996s. Jan 20 00:40:44.374089 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:40:44.962680 kubelet[1549]: E0120 00:40:44.962302 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:40:44.967349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:40:44.967894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:40:44.968377 systemd[1]: kubelet.service: Consumed 4.252s CPU time. Jan 20 00:40:49.196972 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 00:40:49.207860 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:53082.service - OpenSSH per-connection server daemon (10.0.0.1:53082). Jan 20 00:40:49.279110 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53082 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:49.281709 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:49.295210 systemd-logind[1442]: New session 1 of user core. Jan 20 00:40:49.297110 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 00:40:49.311752 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 00:40:49.330493 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 00:40:49.346734 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 00:40:49.350514 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 00:40:49.472936 systemd[1566]: Queued start job for default target default.target. Jan 20 00:40:49.483883 systemd[1566]: Created slice app.slice - User Application Slice. Jan 20 00:40:49.483939 systemd[1566]: Reached target paths.target - Paths. Jan 20 00:40:49.483953 systemd[1566]: Reached target timers.target - Timers. Jan 20 00:40:49.485835 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 00:40:49.503002 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 00:40:49.503228 systemd[1566]: Reached target sockets.target - Sockets. Jan 20 00:40:49.503311 systemd[1566]: Reached target basic.target - Basic System. Jan 20 00:40:49.503368 systemd[1566]: Reached target default.target - Main User Target. Jan 20 00:40:49.503499 systemd[1566]: Startup finished in 143ms. Jan 20 00:40:49.503745 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 00:40:49.527838 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 00:40:49.601340 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:53090.service - OpenSSH per-connection server daemon (10.0.0.1:53090). Jan 20 00:40:49.653688 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 53090 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:49.656238 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:49.664220 systemd-logind[1442]: New session 2 of user core. Jan 20 00:40:49.676864 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 00:40:49.748788 sshd[1577]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:49.770275 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:53090.service: Deactivated successfully. Jan 20 00:40:49.774054 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 00:40:49.775955 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 20 00:40:49.784048 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:53104.service - OpenSSH per-connection server daemon (10.0.0.1:53104). Jan 20 00:40:49.785474 systemd-logind[1442]: Removed session 2. Jan 20 00:40:49.825868 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53104 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:49.827950 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:49.835951 systemd-logind[1442]: New session 3 of user core. Jan 20 00:40:49.849813 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 00:40:49.906089 sshd[1584]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:49.915940 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:53104.service: Deactivated successfully. Jan 20 00:40:49.917930 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 00:40:49.919720 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 20 00:40:49.932065 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114). 
Jan 20 00:40:49.934843 systemd-logind[1442]: Removed session 3. Jan 20 00:40:49.977751 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:49.980113 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:49.987066 systemd-logind[1442]: New session 4 of user core. Jan 20 00:40:50.000725 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 00:40:50.065561 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:50.077857 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:53114.service: Deactivated successfully. Jan 20 00:40:50.079883 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 00:40:50.081603 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Jan 20 00:40:50.098084 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:53118.service - OpenSSH per-connection server daemon (10.0.0.1:53118). Jan 20 00:40:50.099714 systemd-logind[1442]: Removed session 4. Jan 20 00:40:50.143150 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 53118 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:50.145801 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:50.153970 systemd-logind[1442]: New session 5 of user core. Jan 20 00:40:50.164682 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 00:40:50.234820 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 00:40:50.235693 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:50.260051 sudo[1601]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:50.263052 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:50.275719 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:53118.service: Deactivated successfully. Jan 20 00:40:50.277328 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 00:40:50.279078 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 20 00:40:50.280672 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:53122.service - OpenSSH per-connection server daemon (10.0.0.1:53122). Jan 20 00:40:50.281979 systemd-logind[1442]: Removed session 5. Jan 20 00:40:50.321711 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53122 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:50.323772 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:50.329844 systemd-logind[1442]: New session 6 of user core. Jan 20 00:40:50.339607 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 00:40:50.400337 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 00:40:50.400983 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:50.406165 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:50.416503 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 00:40:50.417100 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:50.447873 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 20 00:40:50.451023 auditctl[1613]: No rules Jan 20 00:40:50.451332 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 00:40:50.451813 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 00:40:50.456276 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 00:40:50.633284 augenrules[1631]: No rules Jan 20 00:40:50.639239 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 00:40:50.645105 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 20 00:40:50.662363 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 20 00:40:50.672053 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). Jan 20 00:40:50.698769 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:53122.service: Deactivated successfully. Jan 20 00:40:50.700768 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 00:40:50.716116 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:40:50.718080 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:40:50.728348 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 20 00:40:50.730346 systemd-logind[1442]: Removed session 6. Jan 20 00:40:50.735206 systemd-logind[1442]: New session 7 of user core. Jan 20 00:40:50.748774 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 00:40:50.810814 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 00:40:50.811295 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 00:40:51.135830 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 00:40:51.136027 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 00:40:51.468449 dockerd[1660]: time="2026-01-20T00:40:51.468108706Z" level=info msg="Starting up" Jan 20 00:40:51.612424 dockerd[1660]: time="2026-01-20T00:40:51.612237114Z" level=info msg="Loading containers: start." Jan 20 00:40:51.804456 kernel: Initializing XFRM netlink socket Jan 20 00:40:51.942172 systemd-networkd[1401]: docker0: Link UP Jan 20 00:40:51.970943 dockerd[1660]: time="2026-01-20T00:40:51.970841885Z" level=info msg="Loading containers: done." Jan 20 00:40:51.996465 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2467296463-merged.mount: Deactivated successfully. Jan 20 00:40:52.002240 dockerd[1660]: time="2026-01-20T00:40:52.002139168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 00:40:52.002452 dockerd[1660]: time="2026-01-20T00:40:52.002332444Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 00:40:52.002634 dockerd[1660]: time="2026-01-20T00:40:52.002562112Z" level=info msg="Daemon has completed initialization" Jan 20 00:40:52.078026 dockerd[1660]: time="2026-01-20T00:40:52.077801603Z" level=info msg="API listen on /run/docker.sock" Jan 20 00:40:52.078007 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 20 00:40:52.953650 containerd[1464]: time="2026-01-20T00:40:52.953554300Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 00:40:53.530098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228381876.mount: Deactivated successfully. Jan 20 00:40:54.977085 containerd[1464]: time="2026-01-20T00:40:54.976922788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:54.978141 containerd[1464]: time="2026-01-20T00:40:54.978002551Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073" Jan 20 00:40:54.980672 containerd[1464]: time="2026-01-20T00:40:54.980491350Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:54.988443 containerd[1464]: time="2026-01-20T00:40:54.987089185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:54.993081 containerd[1464]: time="2026-01-20T00:40:54.992994653Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.039321564s" Jan 20 00:40:54.993081 containerd[1464]: time="2026-01-20T00:40:54.993061690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 00:40:54.994758 containerd[1464]: time="2026-01-20T00:40:54.994702841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 00:40:55.040111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 00:40:55.050855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:40:56.164029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:40:56.171194 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:40:57.047763 kubelet[1877]: E0120 00:40:57.046177 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:40:57.059098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:40:57.059561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 00:40:57.061169 systemd[1]: kubelet.service: Consumed 2.426s CPU time. 
Jan 20 00:40:58.602976 containerd[1464]: time="2026-01-20T00:40:58.602770563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:58.604551 containerd[1464]: time="2026-01-20T00:40:58.604004872Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440" Jan 20 00:40:58.606482 containerd[1464]: time="2026-01-20T00:40:58.606285437Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:58.611641 containerd[1464]: time="2026-01-20T00:40:58.611540574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:40:58.613709 containerd[1464]: time="2026-01-20T00:40:58.613506146Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 3.618724951s" Jan 20 00:40:58.613709 containerd[1464]: time="2026-01-20T00:40:58.613593148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 00:40:58.614840 containerd[1464]: time="2026-01-20T00:40:58.614763587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 00:41:01.157769 containerd[1464]: time="2026-01-20T00:41:01.157356494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:01.160198 containerd[1464]: time="2026-01-20T00:41:01.160038366Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927" Jan 20 00:41:01.162459 containerd[1464]: time="2026-01-20T00:41:01.162259087Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:01.170886 containerd[1464]: time="2026-01-20T00:41:01.170838077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:01.173235 containerd[1464]: time="2026-01-20T00:41:01.173123602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.558324351s" Jan 20 00:41:01.173235 containerd[1464]: time="2026-01-20T00:41:01.173212233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 00:41:01.175022 
containerd[1464]: time="2026-01-20T00:41:01.174942891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 00:41:04.178253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137901613.mount: Deactivated successfully. Jan 20 00:41:04.644061 containerd[1464]: time="2026-01-20T00:41:04.643960836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:04.645290 containerd[1464]: time="2026-01-20T00:41:04.645207807Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 20 00:41:04.647044 containerd[1464]: time="2026-01-20T00:41:04.646953043Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:04.651671 containerd[1464]: time="2026-01-20T00:41:04.651554468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:04.652642 containerd[1464]: time="2026-01-20T00:41:04.652540861Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 3.477502249s" Jan 20 00:41:04.652642 containerd[1464]: time="2026-01-20T00:41:04.652602972Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 00:41:04.653513 containerd[1464]: time="2026-01-20T00:41:04.653479303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 00:41:05.146686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081179164.mount: Deactivated successfully. 
Jan 20 00:41:06.152441 containerd[1464]: time="2026-01-20T00:41:06.152227590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.153704 containerd[1464]: time="2026-01-20T00:41:06.153096090Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 20 00:41:06.154989 containerd[1464]: time="2026-01-20T00:41:06.154853782Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.163878 containerd[1464]: time="2026-01-20T00:41:06.163796586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.167231 containerd[1464]: time="2026-01-20T00:41:06.167187786Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.513526427s" Jan 20 00:41:06.167331 containerd[1464]: time="2026-01-20T00:41:06.167234556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 00:41:06.168159 containerd[1464]: time="2026-01-20T00:41:06.168100661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 00:41:06.650668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794385612.mount: Deactivated successfully. 
Jan 20 00:41:06.660493 containerd[1464]: time="2026-01-20T00:41:06.660420377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.661680 containerd[1464]: time="2026-01-20T00:41:06.661574880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 20 00:41:06.662997 containerd[1464]: time="2026-01-20T00:41:06.662946020Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.666336 containerd[1464]: time="2026-01-20T00:41:06.666211644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:06.668019 containerd[1464]: time="2026-01-20T00:41:06.667937811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 499.775694ms" Jan 20 00:41:06.668019 containerd[1464]: time="2026-01-20T00:41:06.667982212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 20 00:41:06.668750 containerd[1464]: time="2026-01-20T00:41:06.668715066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 00:41:07.160849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 00:41:07.167697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:07.173126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093747161.mount: Deactivated successfully. Jan 20 00:41:07.583782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:07.590301 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 00:41:07.878820 kubelet[1978]: E0120 00:41:07.875321 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 00:41:07.878499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 00:41:07.878770 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 20 00:41:12.258103 containerd[1464]: time="2026-01-20T00:41:12.257889504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:12.259816 containerd[1464]: time="2026-01-20T00:41:12.258410739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 20 00:41:12.260238 containerd[1464]: time="2026-01-20T00:41:12.260166118Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:12.264064 containerd[1464]: time="2026-01-20T00:41:12.263996544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:12.265486 containerd[1464]: time="2026-01-20T00:41:12.265418864Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.596604874s" Jan 20 00:41:12.265486 containerd[1464]: time="2026-01-20T00:41:12.265458586Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 00:41:17.072444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:17.087850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:17.139683 systemd[1]: Reloading requested from client PID 2058 ('systemctl') (unit session-7.scope)... Jan 20 00:41:17.139722 systemd[1]: Reloading... Jan 20 00:41:17.302559 zram_generator::config[2096]: No configuration found. Jan 20 00:41:17.516500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:41:17.627789 systemd[1]: Reloading finished in 487 ms. Jan 20 00:41:17.708905 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 00:41:17.709172 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 00:41:17.709900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:17.727206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:17.988760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:17.995604 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:41:18.375448 kubelet[2146]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:41:18.375448 kubelet[2146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:41:18.377608 kubelet[2146]: I0120 00:41:18.377248 2146 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:41:18.662251 kubelet[2146]: I0120 00:41:18.661614 2146 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 00:41:18.662251 kubelet[2146]: I0120 00:41:18.661718 2146 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:41:18.662251 kubelet[2146]: I0120 00:41:18.661946 2146 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 00:41:18.662251 kubelet[2146]: I0120 00:41:18.661976 2146 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:41:18.663229 kubelet[2146]: I0120 00:41:18.663126 2146 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:41:18.749296 kubelet[2146]: E0120 00:41:18.749126 2146 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:41:18.750118 kubelet[2146]: I0120 00:41:18.749986 2146 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:41:18.760441 kubelet[2146]: E0120 00:41:18.757678 2146 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:41:18.760441 kubelet[2146]: I0120 00:41:18.757791 2146 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 20 00:41:18.792954 kubelet[2146]: I0120 00:41:18.792863 2146 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 00:41:18.795156 kubelet[2146]: I0120 00:41:18.795029 2146 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:41:18.795678 kubelet[2146]: I0120 00:41:18.795151 2146 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:41:18.795678 kubelet[2146]: I0120 00:41:18.795650 2146 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:41:18.795678 kubelet[2146]: I0120 00:41:18.795662 2146 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 00:41:18.796138 kubelet[2146]: I0120 00:41:18.795886 2146 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 00:41:18.799913 kubelet[2146]: I0120 00:41:18.799832 2146 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:18.801767 kubelet[2146]: I0120 00:41:18.801672 2146 kubelet.go:475] "Attempting to sync node with API server" Jan 20 00:41:18.801767 kubelet[2146]: I0120 00:41:18.801720 2146 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:41:18.801767 kubelet[2146]: I0120 00:41:18.801760 2146 kubelet.go:387] "Adding apiserver pod source" Jan 20 00:41:18.801901 kubelet[2146]: I0120 00:41:18.801799 2146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:41:18.802751 kubelet[2146]: E0120 00:41:18.802651 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:41:18.803209 kubelet[2146]: E0120 00:41:18.803120 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:41:18.805112 kubelet[2146]: I0120 00:41:18.805069 2146 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:41:18.805854 kubelet[2146]: I0120 00:41:18.805772 2146 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:41:18.805854 kubelet[2146]: I0120 00:41:18.805844 2146 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 00:41:18.805923 kubelet[2146]: W0120 00:41:18.805896 2146 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 00:41:18.811906 kubelet[2146]: I0120 00:41:18.811302 2146 server.go:1262] "Started kubelet" Jan 20 00:41:18.813823 kubelet[2146]: I0120 00:41:18.812865 2146 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:41:18.813823 kubelet[2146]: I0120 00:41:18.812949 2146 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 00:41:18.813823 kubelet[2146]: I0120 00:41:18.813127 2146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:41:18.813823 kubelet[2146]: I0120 00:41:18.813261 2146 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:41:18.813823 kubelet[2146]: I0120 00:41:18.813448 2146 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:41:18.817793 kubelet[2146]: I0120 00:41:18.816770 2146 server.go:310] "Adding debug handlers to kubelet server" Jan 20 00:41:18.817793 kubelet[2146]: E0120 00:41:18.817158 2146 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:41:18.820123 kubelet[2146]: E0120 00:41:18.818689 2146 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:41:18.820123 kubelet[2146]: I0120 00:41:18.818775 2146 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 00:41:18.820123 kubelet[2146]: I0120 00:41:18.818936 2146 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 00:41:18.820123 kubelet[2146]: I0120 00:41:18.818983 2146 reconciler.go:29] "Reconciler: start to sync state" Jan 20 00:41:18.820123 kubelet[2146]: E0120 00:41:18.819430 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:41:18.820123 kubelet[2146]: I0120 00:41:18.819470 2146 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:41:18.825887 kubelet[2146]: I0120 00:41:18.823762 2146 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:41:18.825887 kubelet[2146]: I0120 00:41:18.823881 2146 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:41:18.826170 kubelet[2146]: E0120 00:41:18.819893 2146 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4999db80ca3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:41:18.811236927 +0000 UTC m=+0.768967426,LastTimestamp:2026-01-20 00:41:18.811236927 +0000 UTC m=+0.768967426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:41:18.826941 kubelet[2146]: E0120 00:41:18.826900 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jan 20 00:41:18.828628 kubelet[2146]: I0120 00:41:18.828549 2146 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:41:18.859960 kubelet[2146]: I0120 00:41:18.859822 2146 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:41:18.859960 kubelet[2146]: I0120 00:41:18.859906 2146 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:41:18.859960 kubelet[2146]: I0120 00:41:18.859929 2146 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:18.867773 kubelet[2146]: I0120 00:41:18.867675 2146 policy_none.go:49] "None policy: Start" Jan 20 00:41:18.867905 kubelet[2146]: I0120 00:41:18.867787 2146 memory_manager.go:187] 
"Starting memorymanager" policy="None" Jan 20 00:41:18.867905 kubelet[2146]: I0120 00:41:18.867842 2146 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 00:41:18.870890 kubelet[2146]: I0120 00:41:18.870855 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 00:41:18.871482 kubelet[2146]: I0120 00:41:18.871358 2146 policy_none.go:47] "Start" Jan 20 00:41:18.875650 kubelet[2146]: I0120 00:41:18.875605 2146 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 00:41:18.875650 kubelet[2146]: I0120 00:41:18.875652 2146 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 00:41:18.875748 kubelet[2146]: I0120 00:41:18.875688 2146 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 00:41:18.875783 kubelet[2146]: E0120 00:41:18.875744 2146 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:41:18.878302 kubelet[2146]: E0120 00:41:18.878100 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:41:18.885583 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 00:41:18.924786 kubelet[2146]: E0120 00:41:18.919802 2146 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:41:18.972357 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 00:41:18.978976 kubelet[2146]: E0120 00:41:18.978793 2146 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 00:41:19.003797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 00:41:19.020437 kubelet[2146]: E0120 00:41:19.020200 2146 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:41:19.021231 kubelet[2146]: E0120 00:41:19.021126 2146 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:41:19.021852 kubelet[2146]: I0120 00:41:19.021750 2146 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:41:19.021852 kubelet[2146]: I0120 00:41:19.021772 2146 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:41:19.025032 kubelet[2146]: I0120 00:41:19.024948 2146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:41:19.027213 kubelet[2146]: E0120 00:41:19.027106 2146 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:41:19.027213 kubelet[2146]: E0120 00:41:19.027180 2146 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 00:41:19.028054 kubelet[2146]: E0120 00:41:19.027880 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jan 20 00:41:19.245858 kubelet[2146]: I0120 00:41:19.242323 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:19.245858 kubelet[2146]: I0120 00:41:19.242670 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:19.245858 kubelet[2146]: I0120 00:41:19.242743 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:19.245858 kubelet[2146]: I0120 00:41:19.242935 2146 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:19.271090 kubelet[2146]: E0120 00:41:19.270779 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 00:41:19.381333 systemd[1]: Created slice kubepods-burstable-pod6c0ff9082b5a648737bb83feed134b46.slice - libcontainer container kubepods-burstable-pod6c0ff9082b5a648737bb83feed134b46.slice. Jan 20 00:41:19.399766 kubelet[2146]: E0120 00:41:19.399690 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:19.403494 kubelet[2146]: E0120 00:41:19.403331 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:19.403454 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 20 00:41:19.407293 containerd[1464]: time="2026-01-20T00:41:19.406280545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c0ff9082b5a648737bb83feed134b46,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.407808 kubelet[2146]: E0120 00:41:19.406731 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:19.414268 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 20 00:41:19.418955 kubelet[2146]: E0120 00:41:19.418887 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:19.429545 kubelet[2146]: E0120 00:41:19.429457 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jan 20 00:41:19.446851 kubelet[2146]: I0120 00:41:19.446739 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:19.446931 kubelet[2146]: I0120 00:41:19.446867 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:19.446954 kubelet[2146]: I0120 00:41:19.446942 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:19.447094 kubelet[2146]: I0120 00:41:19.446971 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:19.447157 kubelet[2146]: I0120 00:41:19.447077 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:19.447244 kubelet[2146]: I0120 00:41:19.447178 2146 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:19.473285 kubelet[2146]: I0120 00:41:19.473183 2146 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:19.474008 kubelet[2146]: E0120 00:41:19.473862 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 00:41:19.715350 kubelet[2146]: E0120 00:41:19.715129 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:19.721025 containerd[1464]: time="2026-01-20T00:41:19.720716198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.734649 kubelet[2146]: E0120 00:41:19.734560 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 00:41:19.736694 kubelet[2146]: E0120 00:41:19.736634 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:19.738014 containerd[1464]: time="2026-01-20T00:41:19.737939734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:19.876827 kubelet[2146]: I0120 00:41:19.876755 2146 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:19.877609 kubelet[2146]: E0120 00:41:19.877508 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 00:41:19.896248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24732896.mount: Deactivated successfully. 
Jan 20 00:41:19.903292 containerd[1464]: time="2026-01-20T00:41:19.903132345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:19.907206 containerd[1464]: time="2026-01-20T00:41:19.907031242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 20 00:41:19.908471 containerd[1464]: time="2026-01-20T00:41:19.908318972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:19.909901 containerd[1464]: time="2026-01-20T00:41:19.909791027Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:19.911022 containerd[1464]: time="2026-01-20T00:41:19.910916753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:41:19.912138 containerd[1464]: time="2026-01-20T00:41:19.912086053Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:19.913477 containerd[1464]: time="2026-01-20T00:41:19.913414234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 00:41:19.916112 containerd[1464]: time="2026-01-20T00:41:19.915889510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 00:41:19.917452 containerd[1464]: time="2026-01-20T00:41:19.917236759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.50001ms" Jan 20 00:41:19.920008 containerd[1464]: time="2026-01-20T00:41:19.919914535Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 198.744519ms" Jan 20 00:41:19.933207 containerd[1464]: time="2026-01-20T00:41:19.933112034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 194.976473ms" Jan 20 00:41:19.964901 kubelet[2146]: E0120 00:41:19.964740 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 00:41:20.079158 kubelet[2146]: E0120 00:41:20.079068 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 00:41:20.244700 kubelet[2146]: E0120 00:41:20.244586 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jan 20 00:41:20.286190 kubelet[2146]: E0120 00:41:20.286047 2146 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 00:41:20.662758 containerd[1464]: time="2026-01-20T00:41:20.660323496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:20.662758 containerd[1464]: time="2026-01-20T00:41:20.662622896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:20.662758 containerd[1464]: time="2026-01-20T00:41:20.662639749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.664945 containerd[1464]: time="2026-01-20T00:41:20.664689726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.670486 containerd[1464]: time="2026-01-20T00:41:20.669686284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:20.670486 containerd[1464]: time="2026-01-20T00:41:20.669787096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:20.670486 containerd[1464]: time="2026-01-20T00:41:20.669806835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.670486 containerd[1464]: time="2026-01-20T00:41:20.669932569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.686327 kubelet[2146]: I0120 00:41:20.685722 2146 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:20.686327 kubelet[2146]: E0120 00:41:20.686287 2146 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jan 20 00:41:20.693339 containerd[1464]: time="2026-01-20T00:41:20.690321637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:20.693339 containerd[1464]: time="2026-01-20T00:41:20.691769200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:20.693339 containerd[1464]: time="2026-01-20T00:41:20.691856177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.693339 containerd[1464]: time="2026-01-20T00:41:20.692360407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:20.803319 kubelet[2146]: E0120 00:41:20.803191 2146 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 00:41:20.844597 systemd[1]: Started cri-containerd-aeeaa43151654d3bb5d2541e6eaeec1f181853c15c4bdbc67c97efe38066ec48.scope - libcontainer container aeeaa43151654d3bb5d2541e6eaeec1f181853c15c4bdbc67c97efe38066ec48. Jan 20 00:41:20.859596 systemd[1]: Started cri-containerd-17738eaf59ed47917b9d846f3e3b2e5650c7939ef1562a59b38a3148b089000a.scope - libcontainer container 17738eaf59ed47917b9d846f3e3b2e5650c7939ef1562a59b38a3148b089000a. Jan 20 00:41:20.862871 systemd[1]: Started cri-containerd-de9d1df30babd1c941c108cff802e611f3456ca9df5246ce9a56c4590478db32.scope - libcontainer container de9d1df30babd1c941c108cff802e611f3456ca9df5246ce9a56c4590478db32. Jan 20 00:41:21.355291 containerd[1464]: time="2026-01-20T00:41:21.355162111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeeaa43151654d3bb5d2541e6eaeec1f181853c15c4bdbc67c97efe38066ec48\"" Jan 20 00:41:21.359107 kubelet[2146]: E0120 00:41:21.358954 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:21.366711 containerd[1464]: time="2026-01-20T00:41:21.366675775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"17738eaf59ed47917b9d846f3e3b2e5650c7939ef1562a59b38a3148b089000a\"" Jan 20 00:41:21.369420 kubelet[2146]: E0120 00:41:21.369279 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:21.374757 containerd[1464]: time="2026-01-20T00:41:21.374706016Z" level=info msg="CreateContainer within sandbox \"aeeaa43151654d3bb5d2541e6eaeec1f181853c15c4bdbc67c97efe38066ec48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 00:41:21.375922 containerd[1464]: time="2026-01-20T00:41:21.375823910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6c0ff9082b5a648737bb83feed134b46,Namespace:kube-system,Attempt:0,} returns sandbox id \"de9d1df30babd1c941c108cff802e611f3456ca9df5246ce9a56c4590478db32\"" Jan 20 00:41:21.377433 kubelet[2146]: E0120 00:41:21.376905 2146 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:21.380173 containerd[1464]: time="2026-01-20T00:41:21.380079647Z" level=info msg="CreateContainer within sandbox \"17738eaf59ed47917b9d846f3e3b2e5650c7939ef1562a59b38a3148b089000a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 00:41:21.384629 containerd[1464]: time="2026-01-20T00:41:21.384576033Z" level=info msg="CreateContainer within sandbox \"de9d1df30babd1c941c108cff802e611f3456ca9df5246ce9a56c4590478db32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 00:41:21.424574 containerd[1464]: time="2026-01-20T00:41:21.424236551Z" level=info msg="CreateContainer within sandbox \"17738eaf59ed47917b9d846f3e3b2e5650c7939ef1562a59b38a3148b089000a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c0bf02da053f69659f5022e64e2899de32f1e9bb39de016f334e22c640dd5092\"" Jan 20 00:41:21.429177 containerd[1464]: time="2026-01-20T00:41:21.427268900Z" level=info msg="StartContainer for \"c0bf02da053f69659f5022e64e2899de32f1e9bb39de016f334e22c640dd5092\"" Jan 20 00:41:21.429177 containerd[1464]: time="2026-01-20T00:41:21.428093073Z" level=info msg="CreateContainer within sandbox \"aeeaa43151654d3bb5d2541e6eaeec1f181853c15c4bdbc67c97efe38066ec48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"50fc72b8eb9335510af1fe7bced39e667c59241378284bae78b33149f2a774f6\"" Jan 20 00:41:21.431670 containerd[1464]: time="2026-01-20T00:41:21.431550203Z" level=info msg="CreateContainer within sandbox \"de9d1df30babd1c941c108cff802e611f3456ca9df5246ce9a56c4590478db32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2f43ee7212bcd1c6f1c0e7036e0b8c58e831500c3ecd72db55ee3c9bc0130dc9\"" Jan 20 00:41:21.432161 containerd[1464]: time="2026-01-20T00:41:21.432064490Z" level=info msg="StartContainer for \"50fc72b8eb9335510af1fe7bced39e667c59241378284bae78b33149f2a774f6\"" Jan 20 00:41:21.432161 containerd[1464]: time="2026-01-20T00:41:21.432135875Z" level=info msg="StartContainer for \"2f43ee7212bcd1c6f1c0e7036e0b8c58e831500c3ecd72db55ee3c9bc0130dc9\"" Jan 20 00:41:21.650779 systemd[1]: Started cri-containerd-50fc72b8eb9335510af1fe7bced39e667c59241378284bae78b33149f2a774f6.scope - libcontainer container 50fc72b8eb9335510af1fe7bced39e667c59241378284bae78b33149f2a774f6. Jan 20 00:41:21.662641 systemd[1]: Started cri-containerd-c0bf02da053f69659f5022e64e2899de32f1e9bb39de016f334e22c640dd5092.scope - libcontainer container c0bf02da053f69659f5022e64e2899de32f1e9bb39de016f334e22c640dd5092. Jan 20 00:41:21.702945 systemd[1]: Started cri-containerd-2f43ee7212bcd1c6f1c0e7036e0b8c58e831500c3ecd72db55ee3c9bc0130dc9.scope - libcontainer container 2f43ee7212bcd1c6f1c0e7036e0b8c58e831500c3ecd72db55ee3c9bc0130dc9. 
Jan 20 00:41:21.765273 containerd[1464]: time="2026-01-20T00:41:21.765176908Z" level=info msg="StartContainer for \"c0bf02da053f69659f5022e64e2899de32f1e9bb39de016f334e22c640dd5092\" returns successfully" Jan 20 00:41:21.773791 containerd[1464]: time="2026-01-20T00:41:21.773614095Z" level=info msg="StartContainer for \"50fc72b8eb9335510af1fe7bced39e667c59241378284bae78b33149f2a774f6\" returns successfully" Jan 20 00:41:21.790214 containerd[1464]: time="2026-01-20T00:41:21.790057231Z" level=info msg="StartContainer for \"2f43ee7212bcd1c6f1c0e7036e0b8c58e831500c3ecd72db55ee3c9bc0130dc9\" returns successfully" Jan 20 00:41:21.872741 kubelet[2146]: E0120 00:41:21.872630 2146 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="3.2s" Jan 20 00:41:22.077906 kubelet[2146]: E0120 00:41:22.077149 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:22.077906 kubelet[2146]: E0120 00:41:22.077486 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:22.080934 kubelet[2146]: E0120 00:41:22.080912 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:22.082343 kubelet[2146]: E0120 00:41:22.082320 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:22.084965 kubelet[2146]: E0120 00:41:22.084944 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:22.085444 kubelet[2146]: E0120 00:41:22.085242 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:22.290580 kubelet[2146]: I0120 00:41:22.290107 2146 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:23.091360 kubelet[2146]: E0120 00:41:23.090032 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:23.091360 kubelet[2146]: E0120 00:41:23.090276 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:23.091360 kubelet[2146]: E0120 00:41:23.091071 2146 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 00:41:23.091360 kubelet[2146]: E0120 00:41:23.091194 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:24.609442 update_engine[1444]: I20260120 00:41:24.607291 1444 update_attempter.cc:509] Updating boot flags... 
Jan 20 00:41:24.678489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2440) Jan 20 00:41:24.978479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2442) Jan 20 00:41:27.019577 kubelet[2146]: E0120 00:41:27.019520 2146 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 00:41:27.173294 kubelet[2146]: E0120 00:41:27.172975 2146 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4999db80ca3f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 00:41:18.811236927 +0000 UTC m=+0.768967426,LastTimestamp:2026-01-20 00:41:18.811236927 +0000 UTC m=+0.768967426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 00:41:27.177793 kubelet[2146]: I0120 00:41:27.177686 2146 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:41:27.177793 kubelet[2146]: E0120 00:41:27.177737 2146 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 00:41:27.229524 kubelet[2146]: I0120 00:41:27.228700 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:27.247974 kubelet[2146]: E0120 00:41:27.247784 2146 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:27.247974 kubelet[2146]: I0120 00:41:27.247846 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:27.250697 kubelet[2146]: E0120 00:41:27.250505 2146 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:27.251089 kubelet[2146]: I0120 00:41:27.250853 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:27.253204 kubelet[2146]: E0120 00:41:27.253176 2146 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:27.944136 kubelet[2146]: I0120 00:41:27.943084 2146 apiserver.go:52] "Watching apiserver" Jan 20 00:41:28.019703 kubelet[2146]: I0120 00:41:28.019569 2146 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 00:41:29.034809 kubelet[2146]: I0120 00:41:29.034356 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:29.061329 kubelet[2146]: E0120 00:41:29.060907 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:29.741187 kubelet[2146]: E0120 00:41:29.741133 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:29.844465 kubelet[2146]: I0120 00:41:29.844265 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:29.886456 kubelet[2146]: E0120 00:41:29.886150 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:30.503258 systemd[1]: Reloading requested from client PID 2450 ('systemctl') (unit session-7.scope)... Jan 20 00:41:30.503533 systemd[1]: Reloading... Jan 20 00:41:30.558182 kubelet[2146]: I0120 00:41:30.557841 2146 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:30.572497 kubelet[2146]: E0120 00:41:30.570968 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:30.895685 zram_generator::config[2489]: No configuration found. Jan 20 00:41:30.980706 kubelet[2146]: E0120 00:41:30.979842 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:30.980706 kubelet[2146]: E0120 00:41:30.980646 2146 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:31.486574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 00:41:31.693230 systemd[1]: Reloading finished in 1189 ms. Jan 20 00:41:31.880135 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:31.897203 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 00:41:31.897596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:31.897674 systemd[1]: kubelet.service: Consumed 6.193s CPU time, 133.4M memory peak, 0B memory swap peak. Jan 20 00:41:31.911747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 00:41:32.192249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 00:41:32.211829 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 00:41:32.335122 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 00:41:32.335122 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 00:41:32.335122 kubelet[2534]: I0120 00:41:32.334276 2534 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 00:41:32.348453 kubelet[2534]: I0120 00:41:32.348303 2534 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 00:41:32.348453 kubelet[2534]: I0120 00:41:32.348329 2534 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 00:41:32.348453 kubelet[2534]: I0120 00:41:32.348361 2534 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 00:41:32.348649 kubelet[2534]: I0120 00:41:32.348447 2534 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 00:41:32.348823 kubelet[2534]: I0120 00:41:32.348750 2534 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 00:41:32.350645 kubelet[2534]: I0120 00:41:32.350598 2534 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 00:41:32.359079 kubelet[2534]: I0120 00:41:32.359017 2534 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 00:41:32.364877 kubelet[2534]: E0120 00:41:32.364815 2534 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 00:41:32.364877 kubelet[2534]: I0120 00:41:32.364864 2534 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 20 00:41:32.374820 kubelet[2534]: I0120 00:41:32.374587 2534 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 00:41:32.375260 kubelet[2534]: I0120 00:41:32.375179 2534 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 00:41:32.375541 kubelet[2534]: I0120 00:41:32.375237 2534 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 00:41:32.375705 kubelet[2534]: I0120 00:41:32.375608 2534 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 00:41:32.375705 kubelet[2534]: I0120 00:41:32.375624 2534 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 00:41:32.375751 kubelet[2534]: I0120 00:41:32.375717 2534 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 00:41:32.377135 kubelet[2534]: I0120 00:41:32.377083 2534 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:32.377661 kubelet[2534]: I0120 00:41:32.377620 2534 kubelet.go:475] "Attempting to sync node with API server" Jan 20 00:41:32.377661 kubelet[2534]: I0120 00:41:32.377640 2534 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 00:41:32.377855 kubelet[2534]: I0120 00:41:32.377674 2534 kubelet.go:387] "Adding apiserver pod source" Jan 20 00:41:32.377855 kubelet[2534]: I0120 00:41:32.377700 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 00:41:32.380453 kubelet[2534]: I0120 00:41:32.379540 2534 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 00:41:32.380453 kubelet[2534]: I0120 00:41:32.380233 2534 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 00:41:32.380453 kubelet[2534]: I0120 00:41:32.380268 2534 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 00:41:32.391177 
kubelet[2534]: I0120 00:41:32.391108 2534 server.go:1262] "Started kubelet" Jan 20 00:41:32.397461 kubelet[2534]: I0120 00:41:32.394421 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 00:41:32.397573 kubelet[2534]: E0120 00:41:32.397553 2534 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 00:41:32.398302 kubelet[2534]: I0120 00:41:32.398257 2534 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 00:41:32.399350 kubelet[2534]: I0120 00:41:32.399277 2534 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 00:41:32.400785 kubelet[2534]: I0120 00:41:32.399462 2534 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 00:41:32.400785 kubelet[2534]: I0120 00:41:32.399574 2534 reconciler.go:29] "Reconciler: start to sync state" Jan 20 00:41:32.400785 kubelet[2534]: I0120 00:41:32.399853 2534 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 00:41:32.400785 kubelet[2534]: I0120 00:41:32.399893 2534 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 00:41:32.400785 kubelet[2534]: I0120 00:41:32.400160 2534 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 00:41:32.403310 kubelet[2534]: E0120 00:41:32.403109 2534 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 00:41:32.415531 kubelet[2534]: I0120 00:41:32.409344 2534 factory.go:223] Registration of the systemd container factory successfully Jan 20 00:41:32.417891 kubelet[2534]: I0120 00:41:32.417586 2534 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 00:41:32.420405 kubelet[2534]: I0120 00:41:32.419473 2534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 00:41:32.424715 kubelet[2534]: I0120 00:41:32.424650 2534 server.go:310] "Adding debug handlers to kubelet server" Jan 20 00:41:32.452693 kubelet[2534]: I0120 00:41:32.452592 2534 factory.go:223] Registration of the containerd container factory successfully Jan 20 00:41:32.474221 kubelet[2534]: I0120 00:41:32.474166 2534 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 20 00:41:32.486166 kubelet[2534]: I0120 00:41:32.486131 2534 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 00:41:32.487255 kubelet[2534]: I0120 00:41:32.486850 2534 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 00:41:32.487536 kubelet[2534]: I0120 00:41:32.487519 2534 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 00:41:32.487723 kubelet[2534]: E0120 00:41:32.487702 2534 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559134 2534 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559166 2534 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559190 2534 state_mem.go:36] "Initialized new in-memory state store" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559530 2534 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559548 2534 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559605 2534 policy_none.go:49] "None policy: Start" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559620 2534 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 00:41:32.559653 kubelet[2534]: I0120 00:41:32.559636 2534 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 00:41:32.560187 kubelet[2534]: I0120 00:41:32.559752 2534 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 00:41:32.560187 kubelet[2534]: I0120 00:41:32.559763 2534 policy_none.go:47] "Start" Jan 20 00:41:32.571018 kubelet[2534]: E0120 00:41:32.568998 2534 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 00:41:32.571018 kubelet[2534]: I0120 00:41:32.569310 2534 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 00:41:32.571018 kubelet[2534]: I0120 00:41:32.569325 2534 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 00:41:32.571018 kubelet[2534]: I0120 00:41:32.570231 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 00:41:32.573329 kubelet[2534]: E0120 00:41:32.571723 2534 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 00:41:32.589235 kubelet[2534]: I0120 00:41:32.589123 2534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:32.592433 kubelet[2534]: I0120 00:41:32.589947 2534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:32.592433 kubelet[2534]: I0120 00:41:32.591153 2534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.600275 kubelet[2534]: I0120 00:41:32.600187 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.600275 kubelet[2534]: I0120 00:41:32.600262 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:32.600444 kubelet[2534]: I0120 00:41:32.600292 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:32.600444 kubelet[2534]: I0120 00:41:32.600315 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.600444 kubelet[2534]: I0120 00:41:32.600347 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.600535 kubelet[2534]: I0120 00:41:32.600450 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.600535 kubelet[2534]: I0120 00:41:32.600475 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:32.600535 kubelet[2534]: I0120 00:41:32.600497 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6c0ff9082b5a648737bb83feed134b46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6c0ff9082b5a648737bb83feed134b46\") " pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:32.600535 kubelet[2534]: I0120 00:41:32.600520 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.606116 kubelet[2534]: E0120 00:41:32.606026 2534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 00:41:32.606274 kubelet[2534]: E0120 00:41:32.606220 2534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 00:41:32.607189 kubelet[2534]: E0120 00:41:32.607125 2534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 00:41:32.682446 kubelet[2534]: I0120 00:41:32.682297 2534 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 00:41:32.694685 kubelet[2534]: I0120 00:41:32.694646 2534 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 00:41:32.695995 kubelet[2534]: I0120 00:41:32.694960 2534 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 00:41:32.914213 kubelet[2534]: E0120 00:41:32.913870 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:32.916256 kubelet[2534]: E0120 00:41:32.914545 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:32.919060 kubelet[2534]: E0120 00:41:32.917803 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:33.378556 kubelet[2534]: I0120 00:41:33.378436 2534 apiserver.go:52] "Watching apiserver" Jan 20 00:41:33.400521 kubelet[2534]: I0120 00:41:33.400425 2534 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 00:41:33.697764 kubelet[2534]: E0120 00:41:33.697280 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:33.701784 kubelet[2534]: E0120 00:41:33.699968 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:33.701784 kubelet[2534]: E0120 00:41:33.700123 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:33.761144 kubelet[2534]: I0120 00:41:33.760861 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=4.760821556 podStartE2EDuration="4.760821556s" podCreationTimestamp="2026-01-20 00:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:33.760091412 +0000 UTC m=+1.511524814" watchObservedRunningTime="2026-01-20 00:41:33.760821556 +0000 UTC m=+1.512254957" Jan 20 00:41:33.860170 kubelet[2534]: I0120 00:41:33.860095 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.860071281 podStartE2EDuration="4.860071281s" podCreationTimestamp="2026-01-20 00:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:33.773527064 +0000 UTC m=+1.524960486" watchObservedRunningTime="2026-01-20 00:41:33.860071281 +0000 UTC m=+1.611504683" Jan 20 00:41:33.876317 kubelet[2534]: I0120 00:41:33.876176 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.8761520000000003 podStartE2EDuration="3.876152s" podCreationTimestamp="2026-01-20 00:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:33.861296751 +0000 UTC m=+1.612730154" watchObservedRunningTime="2026-01-20 00:41:33.876152 +0000 UTC m=+1.627585422" Jan 20 00:41:34.711171 kubelet[2534]: E0120 00:41:34.711014 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:34.712926 kubelet[2534]: E0120 00:41:34.712727 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:35.465470 kubelet[2534]: I0120 00:41:35.465180 2534 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 00:41:35.466667 containerd[1464]: time="2026-01-20T00:41:35.466488772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 00:41:35.467322 kubelet[2534]: I0120 00:41:35.467076 2534 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 00:41:35.720434 kubelet[2534]: E0120 00:41:35.719855 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:36.390742 systemd[1]: Created slice kubepods-besteffort-pod1bf6b22a_2a28_4441_903c_6164b17f2018.slice - libcontainer container kubepods-besteffort-pod1bf6b22a_2a28_4441_903c_6164b17f2018.slice. 
Jan 20 00:41:36.501422 kubelet[2534]: I0120 00:41:36.501215 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bf6b22a-2a28-4441-903c-6164b17f2018-xtables-lock\") pod \"kube-proxy-zbtk9\" (UID: \"1bf6b22a-2a28-4441-903c-6164b17f2018\") " pod="kube-system/kube-proxy-zbtk9" Jan 20 00:41:36.501422 kubelet[2534]: I0120 00:41:36.501354 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bf6b22a-2a28-4441-903c-6164b17f2018-lib-modules\") pod \"kube-proxy-zbtk9\" (UID: \"1bf6b22a-2a28-4441-903c-6164b17f2018\") " pod="kube-system/kube-proxy-zbtk9" Jan 20 00:41:36.501718 kubelet[2534]: I0120 00:41:36.501470 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bf6b22a-2a28-4441-903c-6164b17f2018-kube-proxy\") pod \"kube-proxy-zbtk9\" (UID: \"1bf6b22a-2a28-4441-903c-6164b17f2018\") " pod="kube-system/kube-proxy-zbtk9" Jan 20 00:41:36.501718 kubelet[2534]: I0120 00:41:36.501489 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5tx\" (UniqueName: \"kubernetes.io/projected/1bf6b22a-2a28-4441-903c-6164b17f2018-kube-api-access-sd5tx\") pod \"kube-proxy-zbtk9\" (UID: \"1bf6b22a-2a28-4441-903c-6164b17f2018\") " pod="kube-system/kube-proxy-zbtk9" Jan 20 00:41:36.596170 systemd[1]: Created slice kubepods-besteffort-pod7c26d312_321f_4116_8c51_ace641ece266.slice - libcontainer container kubepods-besteffort-pod7c26d312_321f_4116_8c51_ace641ece266.slice. Jan 20 00:41:36.703082 kubelet[2534]: I0120 00:41:36.702793 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c26d312-321f-4116-8c51-ace641ece266-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-ntc7q\" (UID: \"7c26d312-321f-4116-8c51-ace641ece266\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ntc7q" Jan 20 00:41:36.703082 kubelet[2534]: I0120 00:41:36.702874 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkb8h\" (UniqueName: \"kubernetes.io/projected/7c26d312-321f-4116-8c51-ace641ece266-kube-api-access-zkb8h\") pod \"tigera-operator-65cdcdfd6d-ntc7q\" (UID: \"7c26d312-321f-4116-8c51-ace641ece266\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-ntc7q" Jan 20 00:41:36.705679 kubelet[2534]: E0120 00:41:36.705627 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:36.706851 containerd[1464]: time="2026-01-20T00:41:36.706755034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbtk9,Uid:1bf6b22a-2a28-4441-903c-6164b17f2018,Namespace:kube-system,Attempt:0,}" Jan 20 00:41:36.760055 containerd[1464]: time="2026-01-20T00:41:36.759549932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:36.760055 containerd[1464]: time="2026-01-20T00:41:36.759772946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:36.760055 containerd[1464]: time="2026-01-20T00:41:36.759784951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:36.760055 containerd[1464]: time="2026-01-20T00:41:36.759856155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:36.786045 systemd[1]: run-containerd-runc-k8s.io-3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773-runc.16oVuV.mount: Deactivated successfully. Jan 20 00:41:36.800626 systemd[1]: Started cri-containerd-3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773.scope - libcontainer container 3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773. Jan 20 00:41:36.840020 containerd[1464]: time="2026-01-20T00:41:36.839981516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbtk9,Uid:1bf6b22a-2a28-4441-903c-6164b17f2018,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773\"" Jan 20 00:41:36.841346 kubelet[2534]: E0120 00:41:36.841240 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:36.848938 containerd[1464]: time="2026-01-20T00:41:36.848798969Z" level=info msg="CreateContainer within sandbox \"3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 00:41:36.873677 containerd[1464]: time="2026-01-20T00:41:36.873555481Z" level=info msg="CreateContainer within sandbox \"3ede6b818ff7866a2f796d7272eb468e3d7d097e75b3c34a03e612c9f0f14773\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f47a16d66e3b124e380c2c32ab8f188256550a45f8d8bab3070ac65a3205231\"" Jan 20 00:41:36.874333 containerd[1464]: time="2026-01-20T00:41:36.874283848Z" level=info msg="StartContainer for \"8f47a16d66e3b124e380c2c32ab8f188256550a45f8d8bab3070ac65a3205231\"" Jan 20 00:41:36.908220 containerd[1464]: time="2026-01-20T00:41:36.907925357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ntc7q,Uid:7c26d312-321f-4116-8c51-ace641ece266,Namespace:tigera-operator,Attempt:0,}" Jan 20 00:41:36.925685 systemd[1]: Started cri-containerd-8f47a16d66e3b124e380c2c32ab8f188256550a45f8d8bab3070ac65a3205231.scope - libcontainer container 8f47a16d66e3b124e380c2c32ab8f188256550a45f8d8bab3070ac65a3205231. Jan 20 00:41:36.978679 containerd[1464]: time="2026-01-20T00:41:36.978120767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:36.979278 containerd[1464]: time="2026-01-20T00:41:36.978997906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:36.979278 containerd[1464]: time="2026-01-20T00:41:36.979184420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:36.979856 containerd[1464]: time="2026-01-20T00:41:36.979306054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:36.982442 containerd[1464]: time="2026-01-20T00:41:36.980534172Z" level=info msg="StartContainer for \"8f47a16d66e3b124e380c2c32ab8f188256550a45f8d8bab3070ac65a3205231\" returns successfully" Jan 20 00:41:37.023757 systemd[1]: Started cri-containerd-8d7ba2e078148793a310dd675d05b02ebe90cc455b8a652c61b68e973b6a80a8.scope - libcontainer container 8d7ba2e078148793a310dd675d05b02ebe90cc455b8a652c61b68e973b6a80a8. Jan 20 00:41:37.103105 containerd[1464]: time="2026-01-20T00:41:37.103049100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-ntc7q,Uid:7c26d312-321f-4116-8c51-ace641ece266,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d7ba2e078148793a310dd675d05b02ebe90cc455b8a652c61b68e973b6a80a8\"" Jan 20 00:41:37.105741 containerd[1464]: time="2026-01-20T00:41:37.105716967Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 00:41:37.734111 kubelet[2534]: E0120 00:41:37.734053 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:38.758824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545802931.mount: Deactivated successfully. Jan 20 00:41:39.599703 kubelet[2534]: E0120 00:41:39.599559 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:39.602306 kubelet[2534]: E0120 00:41:39.601937 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:39.616816 kubelet[2534]: I0120 00:41:39.616538 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zbtk9" podStartSLOduration=3.61479097 podStartE2EDuration="3.61479097s" podCreationTimestamp="2026-01-20 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:41:37.749761707 +0000 UTC m=+5.501195120" watchObservedRunningTime="2026-01-20 00:41:39.61479097 +0000 UTC m=+7.366224382" Jan 20 00:41:39.742112 kubelet[2534]: E0120 00:41:39.741989 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:39.742961 kubelet[2534]: E0120 00:41:39.742857 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:39.926961 containerd[1464]: time="2026-01-20T00:41:39.926762194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:39.929023 containerd[1464]: time="2026-01-20T00:41:39.928883148Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 00:41:39.930678 containerd[1464]: time="2026-01-20T00:41:39.930603698Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:39.934168 containerd[1464]: 
time="2026-01-20T00:41:39.934087439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:39.935341 containerd[1464]: time="2026-01-20T00:41:39.935254217Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.82933222s" Jan 20 00:41:39.935471 containerd[1464]: time="2026-01-20T00:41:39.935338407Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 00:41:39.943599 containerd[1464]: time="2026-01-20T00:41:39.943493905Z" level=info msg="CreateContainer within sandbox \"8d7ba2e078148793a310dd675d05b02ebe90cc455b8a652c61b68e973b6a80a8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 00:41:39.964212 containerd[1464]: time="2026-01-20T00:41:39.964127447Z" level=info msg="CreateContainer within sandbox \"8d7ba2e078148793a310dd675d05b02ebe90cc455b8a652c61b68e973b6a80a8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2c2e81135f63b734efda5feba928248f5c67b058bce6280018fb326b2c313e36\"" Jan 20 00:41:39.966683 containerd[1464]: time="2026-01-20T00:41:39.965098160Z" level=info msg="StartContainer for \"2c2e81135f63b734efda5feba928248f5c67b058bce6280018fb326b2c313e36\"" Jan 20 00:41:40.013552 systemd[1]: Started cri-containerd-2c2e81135f63b734efda5feba928248f5c67b058bce6280018fb326b2c313e36.scope - libcontainer container 2c2e81135f63b734efda5feba928248f5c67b058bce6280018fb326b2c313e36. 
Jan 20 00:41:40.051810 containerd[1464]: time="2026-01-20T00:41:40.051661898Z" level=info msg="StartContainer for \"2c2e81135f63b734efda5feba928248f5c67b058bce6280018fb326b2c313e36\" returns successfully" Jan 20 00:41:40.098234 kubelet[2534]: E0120 00:41:40.096909 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:40.750145 kubelet[2534]: E0120 00:41:40.749945 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:40.786360 kubelet[2534]: I0120 00:41:40.786271 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-ntc7q" podStartSLOduration=1.954556933 podStartE2EDuration="4.786244698s" podCreationTimestamp="2026-01-20 00:41:36 +0000 UTC" firstStartedPulling="2026-01-20 00:41:37.10496123 +0000 UTC m=+4.856394631" lastFinishedPulling="2026-01-20 00:41:39.936648993 +0000 UTC m=+7.688082396" observedRunningTime="2026-01-20 00:41:40.765469146 +0000 UTC m=+8.516902549" watchObservedRunningTime="2026-01-20 00:41:40.786244698 +0000 UTC m=+8.537678110" Jan 20 00:41:41.802490 kubelet[2534]: E0120 00:41:41.801571 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:45.481588 sudo[1642]: pam_unix(sudo:session): session closed for user root Jan 20 00:41:45.488071 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 20 00:41:45.497222 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:53130.service: Deactivated successfully. Jan 20 00:41:45.497892 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Jan 20 00:41:45.504210 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 00:41:45.506928 systemd[1]: session-7.scope: Consumed 9.838s CPU time, 161.2M memory peak, 0B memory swap peak. Jan 20 00:41:45.517088 systemd-logind[1442]: Removed session 7. 
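The pod_startup_latency_tracker entry above for tigera-operator-65cdcdfd6d-ntc7q logs the creation, pull-window, and observed-running timestamps alongside the derived durations. A hedged Go sketch re-deriving those figures from the printed wall-clock strings; it closely reproduces the logged values (the final few nanoseconds differ because the tracker works from its internal timestamps rather than the printed strings):

    package main

    import (
        "fmt"
        "time"
    )

    // parse reads the wall-clock strings exactly as they appear in the kubelet log.
    func parse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := parse("2026-01-20 00:41:36 +0000 UTC")
        pullStart := parse("2026-01-20 00:41:37.10496123 +0000 UTC")
        pullEnd := parse("2026-01-20 00:41:39.936648993 +0000 UTC")
        running := parse("2026-01-20 00:41:40.786244698 +0000 UTC")

        e2e := running.Sub(created)         // podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // E2E minus the image-pull window
        fmt.Println(e2e, slo)               // 4.786244698s 1.954556935s
    }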
Jan 20 00:41:53.673224 kubelet[2534]: I0120 00:41:53.673045 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b66743d-2e6d-4d1b-b2a4-005142a8d8ef-tigera-ca-bundle\") pod \"calico-typha-776ff4696c-sfpfp\" (UID: \"1b66743d-2e6d-4d1b-b2a4-005142a8d8ef\") " pod="calico-system/calico-typha-776ff4696c-sfpfp" Jan 20 00:41:53.673224 kubelet[2534]: I0120 00:41:53.673078 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1b66743d-2e6d-4d1b-b2a4-005142a8d8ef-typha-certs\") pod \"calico-typha-776ff4696c-sfpfp\" (UID: \"1b66743d-2e6d-4d1b-b2a4-005142a8d8ef\") " pod="calico-system/calico-typha-776ff4696c-sfpfp" Jan 20 00:41:53.673224 kubelet[2534]: I0120 00:41:53.673096 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstf7\" (UniqueName: \"kubernetes.io/projected/1b66743d-2e6d-4d1b-b2a4-005142a8d8ef-kube-api-access-xstf7\") pod \"calico-typha-776ff4696c-sfpfp\" (UID: \"1b66743d-2e6d-4d1b-b2a4-005142a8d8ef\") " pod="calico-system/calico-typha-776ff4696c-sfpfp" Jan 20 00:41:53.685296 systemd[1]: Created slice kubepods-besteffort-pod1b66743d_2e6d_4d1b_b2a4_005142a8d8ef.slice - libcontainer container kubepods-besteffort-pod1b66743d_2e6d_4d1b_b2a4_005142a8d8ef.slice. Jan 20 00:41:53.916488 systemd[1]: Created slice kubepods-besteffort-podc4a717cb_407a_4a18_8902_4b360291c1c3.slice - libcontainer container kubepods-besteffort-podc4a717cb_407a_4a18_8902_4b360291c1c3.slice. Jan 20 00:41:53.996895 kubelet[2534]: E0120 00:41:53.996770 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:54.000215 containerd[1464]: time="2026-01-20T00:41:54.000167971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776ff4696c-sfpfp,Uid:1b66743d-2e6d-4d1b-b2a4-005142a8d8ef,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:54.055171 containerd[1464]: time="2026-01-20T00:41:54.054738394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:54.055660 containerd[1464]: time="2026-01-20T00:41:54.055245237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:54.055660 containerd[1464]: time="2026-01-20T00:41:54.055273945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:54.056484 containerd[1464]: time="2026-01-20T00:41:54.055914643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:54.076283 kubelet[2534]: I0120 00:41:54.075818 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvzx\" (UniqueName: \"kubernetes.io/projected/c4a717cb-407a-4a18-8902-4b360291c1c3-kube-api-access-qpvzx\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076283 kubelet[2534]: I0120 00:41:54.075883 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-lib-modules\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076283 kubelet[2534]: I0120 00:41:54.075910 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a717cb-407a-4a18-8902-4b360291c1c3-tigera-ca-bundle\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076283 kubelet[2534]: I0120 00:41:54.075928 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-cni-bin-dir\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076283 kubelet[2534]: I0120 00:41:54.075949 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-cni-log-dir\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076804 kubelet[2534]: I0120 00:41:54.075964 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-var-run-calico\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076804 kubelet[2534]: I0120 00:41:54.075979 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-xtables-lock\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076804 kubelet[2534]: I0120 00:41:54.075997 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-cni-net-dir\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076804 kubelet[2534]: I0120 00:41:54.076011 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4a717cb-407a-4a18-8902-4b360291c1c3-node-certs\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076804 
kubelet[2534]: I0120 00:41:54.076023 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-var-lib-calico\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076930 kubelet[2534]: I0120 00:41:54.076037 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-flexvol-driver-host\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.076930 kubelet[2534]: I0120 00:41:54.076124 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4a717cb-407a-4a18-8902-4b360291c1c3-policysync\") pod \"calico-node-2bh5j\" (UID: \"c4a717cb-407a-4a18-8902-4b360291c1c3\") " pod="calico-system/calico-node-2bh5j" Jan 20 00:41:54.106846 systemd[1]: Started cri-containerd-513efe372ce91e21a5d1565048e64e43b612d565d53997b077be7f59de1fdae2.scope - libcontainer container 513efe372ce91e21a5d1565048e64e43b612d565d53997b077be7f59de1fdae2. Jan 20 00:41:54.126645 kubelet[2534]: E0120 00:41:54.126481 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:41:54.177461 kubelet[2534]: I0120 00:41:54.177198 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/28fdd63b-baae-4d6e-b08c-1195d95658e8-registration-dir\") pod \"csi-node-driver-q7994\" (UID: \"28fdd63b-baae-4d6e-b08c-1195d95658e8\") " pod="calico-system/csi-node-driver-q7994" Jan 20 00:41:54.177620 kubelet[2534]: I0120 00:41:54.177475 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fk9\" (UniqueName: \"kubernetes.io/projected/28fdd63b-baae-4d6e-b08c-1195d95658e8-kube-api-access-l9fk9\") pod \"csi-node-driver-q7994\" (UID: \"28fdd63b-baae-4d6e-b08c-1195d95658e8\") " pod="calico-system/csi-node-driver-q7994" Jan 20 00:41:54.177620 kubelet[2534]: I0120 00:41:54.177577 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/28fdd63b-baae-4d6e-b08c-1195d95658e8-socket-dir\") pod \"csi-node-driver-q7994\" (UID: \"28fdd63b-baae-4d6e-b08c-1195d95658e8\") " pod="calico-system/csi-node-driver-q7994" Jan 20 00:41:54.177620 kubelet[2534]: I0120 00:41:54.177617 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/28fdd63b-baae-4d6e-b08c-1195d95658e8-varrun\") pod \"csi-node-driver-q7994\" (UID: \"28fdd63b-baae-4d6e-b08c-1195d95658e8\") " pod="calico-system/csi-node-driver-q7994" Jan 20 00:41:54.177739 kubelet[2534]: I0120 00:41:54.177695 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/28fdd63b-baae-4d6e-b08c-1195d95658e8-kubelet-dir\") pod \"csi-node-driver-q7994\" (UID: \"28fdd63b-baae-4d6e-b08c-1195d95658e8\") " pod="calico-system/csi-node-driver-q7994" Jan 20 00:41:54.189438 kubelet[2534]: E0120 00:41:54.189229 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.189438 kubelet[2534]: W0120 00:41:54.189287 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.189438 kubelet[2534]: E0120 00:41:54.189315 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.190032 kubelet[2534]: E0120 00:41:54.189933 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.190032 kubelet[2534]: W0120 00:41:54.189952 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.190032 kubelet[2534]: E0120 00:41:54.189972 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.195298 kubelet[2534]: E0120 00:41:54.195207 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.195298 kubelet[2534]: W0120 00:41:54.195229 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.195298 kubelet[2534]: E0120 00:41:54.195251 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.195903 kubelet[2534]: E0120 00:41:54.195844 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.195903 kubelet[2534]: W0120 00:41:54.195862 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.195903 kubelet[2534]: E0120 00:41:54.195880 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.220546 containerd[1464]: time="2026-01-20T00:41:54.220251727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-776ff4696c-sfpfp,Uid:1b66743d-2e6d-4d1b-b2a4-005142a8d8ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"513efe372ce91e21a5d1565048e64e43b612d565d53997b077be7f59de1fdae2\"" Jan 20 00:41:54.224269 kubelet[2534]: E0120 00:41:54.223637 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:54.226034 containerd[1464]: time="2026-01-20T00:41:54.226006204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 00:41:54.227074 kubelet[2534]: E0120 00:41:54.227002 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:54.227697 containerd[1464]: time="2026-01-20T00:41:54.227668995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bh5j,Uid:c4a717cb-407a-4a18-8902-4b360291c1c3,Namespace:calico-system,Attempt:0,}" Jan 20 00:41:54.281495 kubelet[2534]: E0120 00:41:54.280583 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.281495 kubelet[2534]: W0120 00:41:54.280629 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.281495 kubelet[2534]: E0120 00:41:54.280652 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.281495 kubelet[2534]: E0120 00:41:54.281204 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.281495 kubelet[2534]: W0120 00:41:54.281217 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.281495 kubelet[2534]: E0120 00:41:54.281234 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.283002 kubelet[2534]: E0120 00:41:54.282860 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.283002 kubelet[2534]: W0120 00:41:54.282875 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.283002 kubelet[2534]: E0120 00:41:54.282892 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.283718 kubelet[2534]: E0120 00:41:54.283654 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.283718 kubelet[2534]: W0120 00:41:54.283664 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.283718 kubelet[2534]: E0120 00:41:54.283674 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.284766 kubelet[2534]: E0120 00:41:54.284511 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.284766 kubelet[2534]: W0120 00:41:54.284528 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.284766 kubelet[2534]: E0120 00:41:54.284545 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.286160 kubelet[2534]: E0120 00:41:54.286105 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.286241 kubelet[2534]: W0120 00:41:54.286163 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.286241 kubelet[2534]: E0120 00:41:54.286183 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.286846 kubelet[2534]: E0120 00:41:54.286766 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.286846 kubelet[2534]: W0120 00:41:54.286810 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.286846 kubelet[2534]: E0120 00:41:54.286822 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.287515 kubelet[2534]: E0120 00:41:54.287320 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.287515 kubelet[2534]: W0120 00:41:54.287478 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.287515 kubelet[2534]: E0120 00:41:54.287498 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.288569 kubelet[2534]: E0120 00:41:54.288238 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.288569 kubelet[2534]: W0120 00:41:54.288249 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.288569 kubelet[2534]: E0120 00:41:54.288259 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.289222 kubelet[2534]: E0120 00:41:54.288616 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.289222 kubelet[2534]: W0120 00:41:54.288625 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.289222 kubelet[2534]: E0120 00:41:54.288634 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.289222 kubelet[2534]: E0120 00:41:54.289134 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.289222 kubelet[2534]: W0120 00:41:54.289144 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.289222 kubelet[2534]: E0120 00:41:54.289154 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.289642 kubelet[2534]: E0120 00:41:54.289600 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.289642 kubelet[2534]: W0120 00:41:54.289633 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.289642 kubelet[2534]: E0120 00:41:54.289644 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.290052 kubelet[2534]: E0120 00:41:54.289994 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.290052 kubelet[2534]: W0120 00:41:54.290031 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.290052 kubelet[2534]: E0120 00:41:54.290042 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.290753 kubelet[2534]: E0120 00:41:54.290688 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.290753 kubelet[2534]: W0120 00:41:54.290728 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.290753 kubelet[2534]: E0120 00:41:54.290738 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.291277 kubelet[2534]: E0120 00:41:54.291198 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.291277 kubelet[2534]: W0120 00:41:54.291252 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.291277 kubelet[2534]: E0120 00:41:54.291269 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.291888 kubelet[2534]: E0120 00:41:54.291795 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.291888 kubelet[2534]: W0120 00:41:54.291851 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.291888 kubelet[2534]: E0120 00:41:54.291870 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.292625 kubelet[2534]: E0120 00:41:54.292530 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.292625 kubelet[2534]: W0120 00:41:54.292568 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.292625 kubelet[2534]: E0120 00:41:54.292579 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.293390 kubelet[2534]: E0120 00:41:54.293168 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.293390 kubelet[2534]: W0120 00:41:54.293222 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.293390 kubelet[2534]: E0120 00:41:54.293241 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.294175 kubelet[2534]: E0120 00:41:54.293891 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.294175 kubelet[2534]: W0120 00:41:54.293931 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.294175 kubelet[2534]: E0120 00:41:54.293943 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.294684 kubelet[2534]: E0120 00:41:54.294577 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.294684 kubelet[2534]: W0120 00:41:54.294615 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.294684 kubelet[2534]: E0120 00:41:54.294626 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.295114 kubelet[2534]: E0120 00:41:54.295098 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.295114 kubelet[2534]: W0120 00:41:54.295110 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.295256 kubelet[2534]: E0120 00:41:54.295120 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.296062 kubelet[2534]: E0120 00:41:54.296005 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.296062 kubelet[2534]: W0120 00:41:54.296051 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.296133 kubelet[2534]: E0120 00:41:54.296067 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.297614 kubelet[2534]: E0120 00:41:54.296681 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.297614 kubelet[2534]: W0120 00:41:54.296698 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.297614 kubelet[2534]: E0120 00:41:54.296713 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 00:41:54.297614 kubelet[2534]: E0120 00:41:54.297577 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.297614 kubelet[2534]: W0120 00:41:54.297593 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.297614 kubelet[2534]: E0120 00:41:54.297610 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.298194 kubelet[2534]: E0120 00:41:54.298144 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.298194 kubelet[2534]: W0120 00:41:54.298160 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.298194 kubelet[2534]: E0120 00:41:54.298173 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.301200 containerd[1464]: time="2026-01-20T00:41:54.298017726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:41:54.301200 containerd[1464]: time="2026-01-20T00:41:54.299876387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:41:54.301200 containerd[1464]: time="2026-01-20T00:41:54.300116366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:54.301450 containerd[1464]: time="2026-01-20T00:41:54.300884874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:41:54.314641 kubelet[2534]: E0120 00:41:54.314516 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 00:41:54.314641 kubelet[2534]: W0120 00:41:54.314540 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 00:41:54.314641 kubelet[2534]: E0120 00:41:54.314561 2534 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 00:41:54.327722 systemd[1]: Started cri-containerd-f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91.scope - libcontainer container f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91. 
Jan 20 00:41:54.383664 containerd[1464]: time="2026-01-20T00:41:54.383584995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bh5j,Uid:c4a717cb-407a-4a18-8902-4b360291c1c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\"" Jan 20 00:41:54.386210 kubelet[2534]: E0120 00:41:54.385581 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:54.819986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603089549.mount: Deactivated successfully. Jan 20 00:41:56.490126 kubelet[2534]: E0120 00:41:56.489988 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:41:56.768186 containerd[1464]: time="2026-01-20T00:41:56.767948669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:56.769526 containerd[1464]: time="2026-01-20T00:41:56.769419559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 00:41:56.770694 containerd[1464]: time="2026-01-20T00:41:56.770641071Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:56.774562 containerd[1464]: time="2026-01-20T00:41:56.774497631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:56.775321 containerd[1464]: time="2026-01-20T00:41:56.775197983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.548946479s" Jan 20 00:41:56.775321 containerd[1464]: time="2026-01-20T00:41:56.775263105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 00:41:56.779458 containerd[1464]: time="2026-01-20T00:41:56.777546566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 00:41:56.797476 containerd[1464]: time="2026-01-20T00:41:56.797289523Z" level=info msg="CreateContainer within sandbox \"513efe372ce91e21a5d1565048e64e43b612d565d53997b077be7f59de1fdae2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 00:41:56.824749 containerd[1464]: time="2026-01-20T00:41:56.824665457Z" level=info msg="CreateContainer within sandbox \"513efe372ce91e21a5d1565048e64e43b612d565d53997b077be7f59de1fdae2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc79b8c17faf1a78d8ec73aa81c22470a2f99b999b60d077e080c301d713ad3c\"" Jan 20 00:41:56.826355 containerd[1464]: time="2026-01-20T00:41:56.826214345Z" level=info 
msg="StartContainer for \"fc79b8c17faf1a78d8ec73aa81c22470a2f99b999b60d077e080c301d713ad3c\"" Jan 20 00:41:56.886602 systemd[1]: Started cri-containerd-fc79b8c17faf1a78d8ec73aa81c22470a2f99b999b60d077e080c301d713ad3c.scope - libcontainer container fc79b8c17faf1a78d8ec73aa81c22470a2f99b999b60d077e080c301d713ad3c. Jan 20 00:41:56.963457 containerd[1464]: time="2026-01-20T00:41:56.963274635Z" level=info msg="StartContainer for \"fc79b8c17faf1a78d8ec73aa81c22470a2f99b999b60d077e080c301d713ad3c\" returns successfully" Jan 20 00:41:57.449212 containerd[1464]: time="2026-01-20T00:41:57.449089550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:57.450767 containerd[1464]: time="2026-01-20T00:41:57.450653334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 00:41:57.452606 containerd[1464]: time="2026-01-20T00:41:57.452551163Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:57.455759 containerd[1464]: time="2026-01-20T00:41:57.455692178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:41:57.457135 containerd[1464]: time="2026-01-20T00:41:57.457076892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 679.48807ms" Jan 20 00:41:57.457220 containerd[1464]: time="2026-01-20T00:41:57.457149480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 00:41:57.463949 containerd[1464]: time="2026-01-20T00:41:57.463773263Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 00:41:57.485525 containerd[1464]: time="2026-01-20T00:41:57.485450587Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008\"" Jan 20 00:41:57.486718 containerd[1464]: time="2026-01-20T00:41:57.486595279Z" level=info msg="StartContainer for \"acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008\"" Jan 20 00:41:57.548712 systemd[1]: Started cri-containerd-acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008.scope - libcontainer container acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008. 
Jan 20 00:41:57.606190 containerd[1464]: time="2026-01-20T00:41:57.606025438Z" level=info msg="StartContainer for \"acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008\" returns successfully" Jan 20 00:41:57.617757 kubelet[2534]: E0120 00:41:57.617608 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:57.624767 kubelet[2534]: E0120 00:41:57.624740 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:57.632360 systemd[1]: cri-containerd-acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008.scope: Deactivated successfully. Jan 20 00:41:57.799253 containerd[1464]: time="2026-01-20T00:41:57.798984530Z" level=info msg="shim disconnected" id=acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008 namespace=k8s.io Jan 20 00:41:57.799253 containerd[1464]: time="2026-01-20T00:41:57.799154595Z" level=warning msg="cleaning up after shim disconnected" id=acaf8f9417ec6128cf54c8da0b60488d198e960762bd58f78329724140563008 namespace=k8s.io Jan 20 00:41:57.799253 containerd[1464]: time="2026-01-20T00:41:57.799165115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:41:58.493853 kubelet[2534]: E0120 00:41:58.493721 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:41:58.641341 kubelet[2534]: I0120 00:41:58.641295 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:41:58.643447 kubelet[2534]: E0120 00:41:58.643015 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:58.643447 kubelet[2534]: E0120 00:41:58.643140 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:41:58.645504 containerd[1464]: time="2026-01-20T00:41:58.645461359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 00:41:58.672742 kubelet[2534]: I0120 00:41:58.672508 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-776ff4696c-sfpfp" podStartSLOduration=3.121140352 podStartE2EDuration="5.672457323s" podCreationTimestamp="2026-01-20 00:41:53 +0000 UTC" firstStartedPulling="2026-01-20 00:41:54.225526977 +0000 UTC m=+21.976960379" lastFinishedPulling="2026-01-20 00:41:56.776843947 +0000 UTC m=+24.528277350" observedRunningTime="2026-01-20 00:41:57.665886556 +0000 UTC m=+25.417319988" watchObservedRunningTime="2026-01-20 00:41:58.672457323 +0000 UTC m=+26.423890735" Jan 20 00:42:00.488960 kubelet[2534]: E0120 00:42:00.488894 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:00.906289 
kubelet[2534]: I0120 00:42:00.906181 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:42:00.907902 kubelet[2534]: E0120 00:42:00.907770 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:01.657024 kubelet[2534]: E0120 00:42:01.656733 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:01.769719 containerd[1464]: time="2026-01-20T00:42:01.769548616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:01.771512 containerd[1464]: time="2026-01-20T00:42:01.771324791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 00:42:01.773098 containerd[1464]: time="2026-01-20T00:42:01.772997572Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:01.776818 containerd[1464]: time="2026-01-20T00:42:01.776717458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:01.778720 containerd[1464]: time="2026-01-20T00:42:01.778654060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.133142359s" Jan 20 00:42:01.778720 containerd[1464]: time="2026-01-20T00:42:01.778703251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 00:42:01.786928 containerd[1464]: time="2026-01-20T00:42:01.786805700Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 00:42:01.825600 containerd[1464]: time="2026-01-20T00:42:01.825485334Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47\"" Jan 20 00:42:01.827692 containerd[1464]: time="2026-01-20T00:42:01.827479700Z" level=info msg="StartContainer for \"13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47\"" Jan 20 00:42:01.917734 systemd[1]: Started cri-containerd-13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47.scope - libcontainer container 13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47. 
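The recurring dns.go:154 "Nameserver limits exceeded" warnings indicate the node's resolver configuration lists more nameservers than the three a pod's resolv.conf can carry, so the kubelet keeps the first three ("1.1.1.1 1.0.0.1 8.8.8.8") and omits the rest. A hedged sketch of that truncation (not kubelet's code; the fourth entry shown is hypothetical, since the log only records the servers that were kept):

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the classic three-resolver limit the kubelet
    // enforces when building a pod's resolv.conf.
    const maxNameservers = 3

    func applyLimit(ns []string) (kept, omitted []string) {
        if len(ns) <= maxNameservers {
            return ns, nil
        }
        return ns[:maxNameservers], ns[maxNameservers:]
    }

    func main() {
        // The fourth entry is hypothetical; the log only shows the servers kept.
        kept, omitted := applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        fmt.Printf("the applied nameserver line is: %s (omitted: %v)\n",
            strings.Join(kept, " "), omitted)
    }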
Jan 20 00:42:01.998745 containerd[1464]: time="2026-01-20T00:42:01.998268033Z" level=info msg="StartContainer for \"13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47\" returns successfully" Jan 20 00:42:02.494516 kubelet[2534]: E0120 00:42:02.494331 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:02.667080 kubelet[2534]: E0120 00:42:02.666768 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:03.486207 systemd[1]: cri-containerd-13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47.scope: Deactivated successfully. Jan 20 00:42:03.487091 systemd[1]: cri-containerd-13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47.scope: Consumed 1.769s CPU time. Jan 20 00:42:03.500609 kubelet[2534]: I0120 00:42:03.500488 2534 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 20 00:42:03.532209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47-rootfs.mount: Deactivated successfully. Jan 20 00:42:03.653946 containerd[1464]: time="2026-01-20T00:42:03.653662603Z" level=info msg="shim disconnected" id=13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47 namespace=k8s.io Jan 20 00:42:03.653946 containerd[1464]: time="2026-01-20T00:42:03.653760260Z" level=warning msg="cleaning up after shim disconnected" id=13b37fb10c59db43bcadd9a3ccd3f5c4b7b5bc6f4fcbd02e4d508442f6d22b47 namespace=k8s.io Jan 20 00:42:03.653946 containerd[1464]: time="2026-01-20T00:42:03.653774619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 00:42:03.659644 systemd[1]: Created slice kubepods-burstable-poda912c1f3_a959_479f_b8c4_402f78743287.slice - libcontainer container kubepods-burstable-poda912c1f3_a959_479f_b8c4_402f78743287.slice. 
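The "Created slice" entries show how pod cgroups are named under the systemd cgroup driver: the QoS class and the pod UID (with dashes replaced by underscores) are folded into a kubepods-*.slice unit. A small sketch reproducing the names logged above:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the pattern visible in the "Created slice" log
    // entries: kubepods-<qos>-pod<uid-with-underscores>.slice.
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("burstable", "a912c1f3-a959-479f-b8c4-402f78743287"))
        // kubepods-burstable-poda912c1f3_a959_479f_b8c4_402f78743287.slice
        fmt.Println(podSliceName("besteffort", "7c26d312-321f-4116-8c51-ace641ece266"))
        // kubepods-besteffort-pod7c26d312_321f_4116_8c51_ace641ece266.slice
    }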
Jan 20 00:42:03.682277 kubelet[2534]: E0120 00:42:03.679003 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:03.692461 kubelet[2534]: I0120 00:42:03.684491 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mcc\" (UniqueName: \"kubernetes.io/projected/183d032b-800c-4fa4-8adf-a6b6125809b8-kube-api-access-65mcc\") pod \"calico-apiserver-5fcdf9c988-cp8b5\" (UID: \"183d032b-800c-4fa4-8adf-a6b6125809b8\") " pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" Jan 20 00:42:03.692461 kubelet[2534]: I0120 00:42:03.686988 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a912c1f3-a959-479f-b8c4-402f78743287-config-volume\") pod \"coredns-66bc5c9577-wssqv\" (UID: \"a912c1f3-a959-479f-b8c4-402f78743287\") " pod="kube-system/coredns-66bc5c9577-wssqv" Jan 20 00:42:03.692461 kubelet[2534]: I0120 00:42:03.687024 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkqp\" (UniqueName: \"kubernetes.io/projected/a912c1f3-a959-479f-b8c4-402f78743287-kube-api-access-8gkqp\") pod \"coredns-66bc5c9577-wssqv\" (UID: \"a912c1f3-a959-479f-b8c4-402f78743287\") " pod="kube-system/coredns-66bc5c9577-wssqv" Jan 20 00:42:03.692461 kubelet[2534]: I0120 00:42:03.687053 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhgm\" (UniqueName: \"kubernetes.io/projected/94df424b-d767-451c-98d9-6b195890f32a-kube-api-access-hwhgm\") pod \"calico-apiserver-5fcdf9c988-97g72\" (UID: \"94df424b-d767-451c-98d9-6b195890f32a\") " pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" Jan 20 00:42:03.692461 kubelet[2534]: I0120 00:42:03.687088 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94df424b-d767-451c-98d9-6b195890f32a-calico-apiserver-certs\") pod \"calico-apiserver-5fcdf9c988-97g72\" (UID: \"94df424b-d767-451c-98d9-6b195890f32a\") " pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" Jan 20 00:42:03.691482 systemd[1]: Created slice kubepods-besteffort-pod94df424b_d767_451c_98d9_6b195890f32a.slice - libcontainer container kubepods-besteffort-pod94df424b_d767_451c_98d9_6b195890f32a.slice. Jan 20 00:42:03.693007 kubelet[2534]: I0120 00:42:03.687130 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/183d032b-800c-4fa4-8adf-a6b6125809b8-calico-apiserver-certs\") pod \"calico-apiserver-5fcdf9c988-cp8b5\" (UID: \"183d032b-800c-4fa4-8adf-a6b6125809b8\") " pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" Jan 20 00:42:03.718724 systemd[1]: Created slice kubepods-besteffort-pod183d032b_800c_4fa4_8adf_a6b6125809b8.slice - libcontainer container kubepods-besteffort-pod183d032b_800c_4fa4_8adf_a6b6125809b8.slice. Jan 20 00:42:03.730236 systemd[1]: Created slice kubepods-burstable-pod601adc65_0461_4784_a3c3_b551c4a085b3.slice - libcontainer container kubepods-burstable-pod601adc65_0461_4784_a3c3_b551c4a085b3.slice. 
Jan 20 00:42:03.751038 systemd[1]: Created slice kubepods-besteffort-poda7b5c492_f30c_4416_b600_12afd3fc29bc.slice - libcontainer container kubepods-besteffort-poda7b5c492_f30c_4416_b600_12afd3fc29bc.slice. Jan 20 00:42:03.764323 systemd[1]: Created slice kubepods-besteffort-pod16f84c69_9c06_4195_a968_2c29cf809ca6.slice - libcontainer container kubepods-besteffort-pod16f84c69_9c06_4195_a968_2c29cf809ca6.slice. Jan 20 00:42:03.778044 systemd[1]: Created slice kubepods-besteffort-pod88df5820_4d41_412b_bac0_45d81ef0f210.slice - libcontainer container kubepods-besteffort-pod88df5820_4d41_412b_bac0_45d81ef0f210.slice. Jan 20 00:42:03.788793 kubelet[2534]: I0120 00:42:03.788746 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f84c69-9c06-4195-a968-2c29cf809ca6-config\") pod \"goldmane-7c778bb748-c8bcc\" (UID: \"16f84c69-9c06-4195-a968-2c29cf809ca6\") " pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:03.789061 kubelet[2534]: I0120 00:42:03.788802 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/16f84c69-9c06-4195-a968-2c29cf809ca6-goldmane-key-pair\") pod \"goldmane-7c778bb748-c8bcc\" (UID: \"16f84c69-9c06-4195-a968-2c29cf809ca6\") " pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:03.789061 kubelet[2534]: I0120 00:42:03.788835 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/601adc65-0461-4784-a3c3-b551c4a085b3-config-volume\") pod \"coredns-66bc5c9577-c474g\" (UID: \"601adc65-0461-4784-a3c3-b551c4a085b3\") " pod="kube-system/coredns-66bc5c9577-c474g" Jan 20 00:42:03.789061 kubelet[2534]: I0120 00:42:03.788964 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtm9p\" (UniqueName: \"kubernetes.io/projected/a7b5c492-f30c-4416-b600-12afd3fc29bc-kube-api-access-wtm9p\") pod \"calico-kube-controllers-6b77768bf7-8nrk9\" (UID: \"a7b5c492-f30c-4416-b600-12afd3fc29bc\") " pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" Jan 20 00:42:03.789061 kubelet[2534]: I0120 00:42:03.788995 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f84c69-9c06-4195-a968-2c29cf809ca6-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-c8bcc\" (UID: \"16f84c69-9c06-4195-a968-2c29cf809ca6\") " pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:03.789061 kubelet[2534]: I0120 00:42:03.789024 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bb89\" (UniqueName: \"kubernetes.io/projected/16f84c69-9c06-4195-a968-2c29cf809ca6-kube-api-access-5bb89\") pod \"goldmane-7c778bb748-c8bcc\" (UID: \"16f84c69-9c06-4195-a968-2c29cf809ca6\") " pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:03.789266 kubelet[2534]: I0120 00:42:03.789113 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-backend-key-pair\") pod \"whisker-59d4dcb77c-t6w5v\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " pod="calico-system/whisker-59d4dcb77c-t6w5v" Jan 20 00:42:03.789266 kubelet[2534]: I0120 
00:42:03.789161 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn5qx\" (UniqueName: \"kubernetes.io/projected/88df5820-4d41-412b-bac0-45d81ef0f210-kube-api-access-dn5qx\") pod \"whisker-59d4dcb77c-t6w5v\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " pod="calico-system/whisker-59d4dcb77c-t6w5v" Jan 20 00:42:03.789266 kubelet[2534]: I0120 00:42:03.789205 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7b5c492-f30c-4416-b600-12afd3fc29bc-tigera-ca-bundle\") pod \"calico-kube-controllers-6b77768bf7-8nrk9\" (UID: \"a7b5c492-f30c-4416-b600-12afd3fc29bc\") " pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" Jan 20 00:42:03.789266 kubelet[2534]: I0120 00:42:03.789230 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbp5\" (UniqueName: \"kubernetes.io/projected/601adc65-0461-4784-a3c3-b551c4a085b3-kube-api-access-xdbp5\") pod \"coredns-66bc5c9577-c474g\" (UID: \"601adc65-0461-4784-a3c3-b551c4a085b3\") " pod="kube-system/coredns-66bc5c9577-c474g" Jan 20 00:42:03.789266 kubelet[2534]: I0120 00:42:03.789251 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-ca-bundle\") pod \"whisker-59d4dcb77c-t6w5v\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " pod="calico-system/whisker-59d4dcb77c-t6w5v" Jan 20 00:42:03.972027 kubelet[2534]: E0120 00:42:03.971961 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:03.973043 containerd[1464]: time="2026-01-20T00:42:03.972874437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wssqv,Uid:a912c1f3-a959-479f-b8c4-402f78743287,Namespace:kube-system,Attempt:0,}" Jan 20 00:42:04.009860 containerd[1464]: time="2026-01-20T00:42:04.009559293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-97g72,Uid:94df424b-d767-451c-98d9-6b195890f32a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:42:04.030000 containerd[1464]: time="2026-01-20T00:42:04.029746585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-cp8b5,Uid:183d032b-800c-4fa4-8adf-a6b6125809b8,Namespace:calico-apiserver,Attempt:0,}" Jan 20 00:42:04.048780 kubelet[2534]: E0120 00:42:04.048648 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:04.050297 containerd[1464]: time="2026-01-20T00:42:04.050150911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c474g,Uid:601adc65-0461-4784-a3c3-b551c4a085b3,Namespace:kube-system,Attempt:0,}" Jan 20 00:42:04.062871 containerd[1464]: time="2026-01-20T00:42:04.062567733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b77768bf7-8nrk9,Uid:a7b5c492-f30c-4416-b600-12afd3fc29bc,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:04.079148 containerd[1464]: time="2026-01-20T00:42:04.078704646Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-c8bcc,Uid:16f84c69-9c06-4195-a968-2c29cf809ca6,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:04.088804 containerd[1464]: time="2026-01-20T00:42:04.088655634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d4dcb77c-t6w5v,Uid:88df5820-4d41-412b-bac0-45d81ef0f210,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:04.235073 containerd[1464]: time="2026-01-20T00:42:04.234712592Z" level=error msg="Failed to destroy network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.238606 containerd[1464]: time="2026-01-20T00:42:04.238460263Z" level=error msg="encountered an error cleaning up failed sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.238606 containerd[1464]: time="2026-01-20T00:42:04.238544212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wssqv,Uid:a912c1f3-a959-479f-b8c4-402f78743287,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.262794 containerd[1464]: time="2026-01-20T00:42:04.262344670Z" level=error msg="Failed to destroy network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.267130 containerd[1464]: time="2026-01-20T00:42:04.266895509Z" level=error msg="encountered an error cleaning up failed sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.267130 containerd[1464]: time="2026-01-20T00:42:04.267047074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-cp8b5,Uid:183d032b-800c-4fa4-8adf-a6b6125809b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.273274 kubelet[2534]: E0120 00:42:04.273185 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 20 00:42:04.273498 kubelet[2534]: E0120 00:42:04.273293 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" Jan 20 00:42:04.273498 kubelet[2534]: E0120 00:42:04.273323 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" Jan 20 00:42:04.273736 kubelet[2534]: E0120 00:42:04.273482 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fcdf9c988-cp8b5_calico-apiserver(183d032b-800c-4fa4-8adf-a6b6125809b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcdf9c988-cp8b5_calico-apiserver(183d032b-800c-4fa4-8adf-a6b6125809b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:04.274735 kubelet[2534]: E0120 00:42:04.274553 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.274735 kubelet[2534]: E0120 00:42:04.274606 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wssqv" Jan 20 00:42:04.274735 kubelet[2534]: E0120 00:42:04.274632 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wssqv" Jan 20 00:42:04.274862 kubelet[2534]: E0120 00:42:04.274684 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wssqv_kube-system(a912c1f3-a959-479f-b8c4-402f78743287)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wssqv_kube-system(a912c1f3-a959-479f-b8c4-402f78743287)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wssqv" podUID="a912c1f3-a959-479f-b8c4-402f78743287" Jan 20 00:42:04.292573 containerd[1464]: time="2026-01-20T00:42:04.292513636Z" level=error msg="Failed to destroy network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.293716 containerd[1464]: time="2026-01-20T00:42:04.293674292Z" level=error msg="encountered an error cleaning up failed sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.293913 containerd[1464]: time="2026-01-20T00:42:04.293878844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-97g72,Uid:94df424b-d767-451c-98d9-6b195890f32a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.295483 kubelet[2534]: E0120 00:42:04.294725 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.295483 kubelet[2534]: E0120 00:42:04.294800 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" Jan 20 00:42:04.295483 kubelet[2534]: E0120 00:42:04.294832 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" Jan 20 00:42:04.295633 kubelet[2534]: E0120 00:42:04.294995 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-5fcdf9c988-97g72_calico-apiserver(94df424b-d767-451c-98d9-6b195890f32a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fcdf9c988-97g72_calico-apiserver(94df424b-d767-451c-98d9-6b195890f32a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:04.324653 containerd[1464]: time="2026-01-20T00:42:04.324462770Z" level=error msg="Failed to destroy network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.327155 containerd[1464]: time="2026-01-20T00:42:04.326796687Z" level=error msg="encountered an error cleaning up failed sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.327668 containerd[1464]: time="2026-01-20T00:42:04.327528869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c474g,Uid:601adc65-0461-4784-a3c3-b551c4a085b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.330030 kubelet[2534]: E0120 00:42:04.328583 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.330030 kubelet[2534]: E0120 00:42:04.328662 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c474g" Jan 20 00:42:04.330030 kubelet[2534]: E0120 00:42:04.328691 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-c474g" Jan 20 00:42:04.330198 kubelet[2534]: E0120 00:42:04.328773 2534 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-c474g_kube-system(601adc65-0461-4784-a3c3-b551c4a085b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-c474g_kube-system(601adc65-0461-4784-a3c3-b551c4a085b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-c474g" podUID="601adc65-0461-4784-a3c3-b551c4a085b3" Jan 20 00:42:04.349711 containerd[1464]: time="2026-01-20T00:42:04.349355943Z" level=error msg="Failed to destroy network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.352231 containerd[1464]: time="2026-01-20T00:42:04.352188753Z" level=error msg="encountered an error cleaning up failed sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.352498 containerd[1464]: time="2026-01-20T00:42:04.352460772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59d4dcb77c-t6w5v,Uid:88df5820-4d41-412b-bac0-45d81ef0f210,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.354684 kubelet[2534]: E0120 00:42:04.353051 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.354684 kubelet[2534]: E0120 00:42:04.353137 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d4dcb77c-t6w5v" Jan 20 00:42:04.354684 kubelet[2534]: E0120 00:42:04.353173 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59d4dcb77c-t6w5v" Jan 20 
00:42:04.354844 kubelet[2534]: E0120 00:42:04.353247 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59d4dcb77c-t6w5v_calico-system(88df5820-4d41-412b-bac0-45d81ef0f210)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59d4dcb77c-t6w5v_calico-system(88df5820-4d41-412b-bac0-45d81ef0f210)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59d4dcb77c-t6w5v" podUID="88df5820-4d41-412b-bac0-45d81ef0f210" Jan 20 00:42:04.365602 containerd[1464]: time="2026-01-20T00:42:04.365533249Z" level=error msg="Failed to destroy network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.366242 containerd[1464]: time="2026-01-20T00:42:04.366209550Z" level=error msg="encountered an error cleaning up failed sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.366296 containerd[1464]: time="2026-01-20T00:42:04.366278720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b77768bf7-8nrk9,Uid:a7b5c492-f30c-4416-b600-12afd3fc29bc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.366732 kubelet[2534]: E0120 00:42:04.366665 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.366801 kubelet[2534]: E0120 00:42:04.366744 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" Jan 20 00:42:04.366801 kubelet[2534]: E0120 00:42:04.366765 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" Jan 20 00:42:04.366896 kubelet[2534]: E0120 00:42:04.366870 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b77768bf7-8nrk9_calico-system(a7b5c492-f30c-4416-b600-12afd3fc29bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b77768bf7-8nrk9_calico-system(a7b5c492-f30c-4416-b600-12afd3fc29bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:04.382122 containerd[1464]: time="2026-01-20T00:42:04.382014495Z" level=error msg="Failed to destroy network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.383165 containerd[1464]: time="2026-01-20T00:42:04.383072595Z" level=error msg="encountered an error cleaning up failed sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.383270 containerd[1464]: time="2026-01-20T00:42:04.383180702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c8bcc,Uid:16f84c69-9c06-4195-a968-2c29cf809ca6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.384153 kubelet[2534]: E0120 00:42:04.384027 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.384225 kubelet[2534]: E0120 00:42:04.384145 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:04.384225 kubelet[2534]: E0120 00:42:04.384201 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c8bcc" Jan 20 00:42:04.384487 kubelet[2534]: E0120 00:42:04.384317 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-c8bcc_calico-system(16f84c69-9c06-4195-a968-2c29cf809ca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-c8bcc_calico-system(16f84c69-9c06-4195-a968-2c29cf809ca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:04.499286 systemd[1]: Created slice kubepods-besteffort-pod28fdd63b_baae_4d6e_b08c_1195d95658e8.slice - libcontainer container kubepods-besteffort-pod28fdd63b_baae_4d6e_b08c_1195d95658e8.slice. Jan 20 00:42:04.506250 containerd[1464]: time="2026-01-20T00:42:04.506197260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7994,Uid:28fdd63b-baae-4d6e-b08c-1195d95658e8,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:04.612207 containerd[1464]: time="2026-01-20T00:42:04.611955730Z" level=error msg="Failed to destroy network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.614890 containerd[1464]: time="2026-01-20T00:42:04.614736131Z" level=error msg="encountered an error cleaning up failed sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.614890 containerd[1464]: time="2026-01-20T00:42:04.614802053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7994,Uid:28fdd63b-baae-4d6e-b08c-1195d95658e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.614853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937-shm.mount: Deactivated successfully. 
Jan 20 00:42:04.616452 kubelet[2534]: E0120 00:42:04.615762 2534 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.616452 kubelet[2534]: E0120 00:42:04.615841 2534 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7994" Jan 20 00:42:04.616452 kubelet[2534]: E0120 00:42:04.615869 2534 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q7994" Jan 20 00:42:04.616594 kubelet[2534]: E0120 00:42:04.615945 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:04.684487 kubelet[2534]: I0120 00:42:04.684331 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:04.688910 kubelet[2534]: I0120 00:42:04.688864 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:04.690043 containerd[1464]: time="2026-01-20T00:42:04.689841052Z" level=info msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" Jan 20 00:42:04.690867 containerd[1464]: time="2026-01-20T00:42:04.690723757Z" level=info msg="Ensure that sandbox aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e in task-service has been cleanup successfully" Jan 20 00:42:04.691305 containerd[1464]: time="2026-01-20T00:42:04.691170782Z" level=info msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" Jan 20 00:42:04.691619 containerd[1464]: time="2026-01-20T00:42:04.691496678Z" level=info msg="Ensure that sandbox 129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937 in task-service has been cleanup successfully" Jan 20 00:42:04.695438 kubelet[2534]: I0120 00:42:04.695303 2534 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:04.696048 containerd[1464]: time="2026-01-20T00:42:04.695979462Z" level=info msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" Jan 20 00:42:04.697033 containerd[1464]: time="2026-01-20T00:42:04.696609960Z" level=info msg="Ensure that sandbox 9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440 in task-service has been cleanup successfully" Jan 20 00:42:04.700235 kubelet[2534]: I0120 00:42:04.700212 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:04.703796 containerd[1464]: time="2026-01-20T00:42:04.703645208Z" level=info msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" Jan 20 00:42:04.703964 containerd[1464]: time="2026-01-20T00:42:04.703929330Z" level=info msg="Ensure that sandbox 3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12 in task-service has been cleanup successfully" Jan 20 00:42:04.715202 kubelet[2534]: E0120 00:42:04.715076 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:04.718582 containerd[1464]: time="2026-01-20T00:42:04.718124557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 00:42:04.724536 kubelet[2534]: I0120 00:42:04.724284 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:04.726843 containerd[1464]: time="2026-01-20T00:42:04.726703591Z" level=info msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" Jan 20 00:42:04.727245 containerd[1464]: time="2026-01-20T00:42:04.726987442Z" level=info msg="Ensure that sandbox d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca in task-service has been cleanup successfully" Jan 20 00:42:04.736781 kubelet[2534]: I0120 00:42:04.736565 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:04.741898 containerd[1464]: time="2026-01-20T00:42:04.741601200Z" level=info msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" Jan 20 00:42:04.742344 containerd[1464]: time="2026-01-20T00:42:04.742112739Z" level=info msg="Ensure that sandbox 312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9 in task-service has been cleanup successfully" Jan 20 00:42:04.758146 kubelet[2534]: I0120 00:42:04.757998 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:04.766864 containerd[1464]: time="2026-01-20T00:42:04.766263162Z" level=info msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" Jan 20 00:42:04.766864 containerd[1464]: time="2026-01-20T00:42:04.766567496Z" level=info msg="Ensure that sandbox 14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf in task-service has been cleanup successfully" Jan 20 00:42:04.773457 kubelet[2534]: I0120 00:42:04.773331 2534 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:04.777110 containerd[1464]: time="2026-01-20T00:42:04.777072506Z" level=info msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" Jan 20 00:42:04.780131 containerd[1464]: time="2026-01-20T00:42:04.780108035Z" level=info msg="Ensure that sandbox 705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e in task-service has been cleanup successfully" Jan 20 00:42:04.788097 containerd[1464]: time="2026-01-20T00:42:04.787985241Z" level=error msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" failed" error="failed to destroy network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.788971 kubelet[2534]: E0120 00:42:04.788845 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:04.789612 kubelet[2534]: E0120 00:42:04.789484 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e"} Jan 20 00:42:04.789788 kubelet[2534]: E0120 00:42:04.789772 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16f84c69-9c06-4195-a968-2c29cf809ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.790285 kubelet[2534]: E0120 00:42:04.790147 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16f84c69-9c06-4195-a968-2c29cf809ca6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:04.809905 containerd[1464]: time="2026-01-20T00:42:04.809841451Z" level=error msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" failed" error="failed to destroy network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.810836 kubelet[2534]: E0120 00:42:04.810633 2534 log.go:32] "StopPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:04.810836 kubelet[2534]: E0120 00:42:04.810699 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440"} Jan 20 00:42:04.810836 kubelet[2534]: E0120 00:42:04.810748 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7b5c492-f30c-4416-b600-12afd3fc29bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.810836 kubelet[2534]: E0120 00:42:04.810790 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7b5c492-f30c-4416-b600-12afd3fc29bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:04.818078 containerd[1464]: time="2026-01-20T00:42:04.817985910Z" level=error msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" failed" error="failed to destroy network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.818898 kubelet[2534]: E0120 00:42:04.818838 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:04.819167 kubelet[2534]: E0120 00:42:04.819140 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937"} Jan 20 00:42:04.819357 kubelet[2534]: E0120 00:42:04.819267 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28fdd63b-baae-4d6e-b08c-1195d95658e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.819357 kubelet[2534]: E0120 00:42:04.819313 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28fdd63b-baae-4d6e-b08c-1195d95658e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:04.851150 containerd[1464]: time="2026-01-20T00:42:04.851095497Z" level=error msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" failed" error="failed to destroy network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.852330 kubelet[2534]: E0120 00:42:04.852277 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:04.852629 kubelet[2534]: E0120 00:42:04.852602 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12"} Jan 20 00:42:04.852760 kubelet[2534]: E0120 00:42:04.852736 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"601adc65-0461-4784-a3c3-b551c4a085b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.852964 kubelet[2534]: E0120 00:42:04.852933 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"601adc65-0461-4784-a3c3-b551c4a085b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-c474g" podUID="601adc65-0461-4784-a3c3-b551c4a085b3" Jan 20 00:42:04.861321 containerd[1464]: time="2026-01-20T00:42:04.861275110Z" level=error msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" failed" error="failed to destroy network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.861921 kubelet[2534]: E0120 00:42:04.861882 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:04.862203 kubelet[2534]: E0120 00:42:04.862174 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9"} Jan 20 00:42:04.862516 kubelet[2534]: E0120 00:42:04.862356 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88df5820-4d41-412b-bac0-45d81ef0f210\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.862855 kubelet[2534]: E0120 00:42:04.862691 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88df5820-4d41-412b-bac0-45d81ef0f210\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59d4dcb77c-t6w5v" podUID="88df5820-4d41-412b-bac0-45d81ef0f210" Jan 20 00:42:04.873893 containerd[1464]: time="2026-01-20T00:42:04.873748655Z" level=error msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" failed" error="failed to destroy network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.874281 kubelet[2534]: E0120 00:42:04.874145 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:04.874281 kubelet[2534]: E0120 00:42:04.874235 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e"} Jan 20 00:42:04.874281 kubelet[2534]: E0120 00:42:04.874282 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"a912c1f3-a959-479f-b8c4-402f78743287\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.874718 kubelet[2534]: E0120 00:42:04.874325 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a912c1f3-a959-479f-b8c4-402f78743287\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wssqv" podUID="a912c1f3-a959-479f-b8c4-402f78743287" Jan 20 00:42:04.879575 containerd[1464]: time="2026-01-20T00:42:04.879487717Z" level=error msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" failed" error="failed to destroy network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.879973 kubelet[2534]: E0120 00:42:04.879890 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:04.880103 kubelet[2534]: E0120 00:42:04.879992 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca"} Jan 20 00:42:04.880228 kubelet[2534]: E0120 00:42:04.880094 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94df424b-d767-451c-98d9-6b195890f32a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.880646 kubelet[2534]: E0120 00:42:04.880311 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94df424b-d767-451c-98d9-6b195890f32a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:04.890839 containerd[1464]: 
time="2026-01-20T00:42:04.890638155Z" level=error msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" failed" error="failed to destroy network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 00:42:04.891746 kubelet[2534]: E0120 00:42:04.891522 2534 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:04.891746 kubelet[2534]: E0120 00:42:04.891624 2534 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf"} Jan 20 00:42:04.891746 kubelet[2534]: E0120 00:42:04.891679 2534 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"183d032b-800c-4fa4-8adf-a6b6125809b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 00:42:04.891746 kubelet[2534]: E0120 00:42:04.891723 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"183d032b-800c-4fa4-8adf-a6b6125809b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:10.572486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973485828.mount: Deactivated successfully. 
Jan 20 00:42:10.868247 containerd[1464]: time="2026-01-20T00:42:10.868027624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:10.887231 containerd[1464]: time="2026-01-20T00:42:10.887100330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 00:42:10.900686 containerd[1464]: time="2026-01-20T00:42:10.900608156Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:10.917932 containerd[1464]: time="2026-01-20T00:42:10.917746608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.199561539s" Jan 20 00:42:10.917932 containerd[1464]: time="2026-01-20T00:42:10.917836037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 00:42:10.918866 containerd[1464]: time="2026-01-20T00:42:10.918804582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 00:42:10.939706 containerd[1464]: time="2026-01-20T00:42:10.939562472Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 00:42:10.993869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130144696.mount: Deactivated successfully. Jan 20 00:42:11.016262 containerd[1464]: time="2026-01-20T00:42:11.016032303Z" level=info msg="CreateContainer within sandbox \"f8a51fb0f21e6d26bdc5cd0ee2973ebb97bd694f81a513c0de9cb1dec46a4c91\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d\"" Jan 20 00:42:11.019483 containerd[1464]: time="2026-01-20T00:42:11.017797729Z" level=info msg="StartContainer for \"8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d\"" Jan 20 00:42:11.147841 systemd[1]: Started cri-containerd-8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d.scope - libcontainer container 8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d. Jan 20 00:42:11.219818 containerd[1464]: time="2026-01-20T00:42:11.219680354Z" level=info msg="StartContainer for \"8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d\" returns successfully" Jan 20 00:42:11.462741 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 00:42:11.464153 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
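The two containerd entries above give enough to estimate the effective pull rate for ghcr.io/flatcar/calico/node:v3.30.4: roughly 156.9 MB read in about 6.2 s. A quick back-of-the-envelope check, plain arithmetic on the logged numbers rather than anything containerd reports itself:

    # From "bytes read=156883675" and "... in 6.199561539s" above.
    bytes_read = 156_883_675
    seconds = 6.199_561_539
    rate = bytes_read / seconds
    print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")  # ~25.3 MB/s (~24.1 MiB/s)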
Jan 20 00:42:11.718766 containerd[1464]: time="2026-01-20T00:42:11.718108264Z" level=info msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" Jan 20 00:42:11.834353 kubelet[2534]: E0120 00:42:11.833588 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:11.886215 kubelet[2534]: I0120 00:42:11.883554 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2bh5j" podStartSLOduration=2.352284368 podStartE2EDuration="18.883532537s" podCreationTimestamp="2026-01-20 00:41:53 +0000 UTC" firstStartedPulling="2026-01-20 00:41:54.388921752 +0000 UTC m=+22.140355164" lastFinishedPulling="2026-01-20 00:42:10.92016993 +0000 UTC m=+38.671603333" observedRunningTime="2026-01-20 00:42:11.881333105 +0000 UTC m=+39.632766537" watchObservedRunningTime="2026-01-20 00:42:11.883532537 +0000 UTC m=+39.634965959" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.883 [INFO][3749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.884 [INFO][3749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" iface="eth0" netns="/var/run/netns/cni-66ea7e2c-ba7d-fab7-6021-0343b0026a98" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.889 [INFO][3749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" iface="eth0" netns="/var/run/netns/cni-66ea7e2c-ba7d-fab7-6021-0343b0026a98" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.891 [INFO][3749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" iface="eth0" netns="/var/run/netns/cni-66ea7e2c-ba7d-fab7-6021-0343b0026a98" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.891 [INFO][3749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:11.891 [INFO][3749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.121 [INFO][3758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.124 [INFO][3758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.124 [INFO][3758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.137 [WARNING][3758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.137 [INFO][3758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.140 [INFO][3758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:12.147344 containerd[1464]: 2026-01-20 00:42:12.144 [INFO][3749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:12.150780 containerd[1464]: time="2026-01-20T00:42:12.150693683Z" level=info msg="TearDown network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" successfully" Jan 20 00:42:12.150833 containerd[1464]: time="2026-01-20T00:42:12.150780348Z" level=info msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" returns successfully" Jan 20 00:42:12.152861 systemd[1]: run-netns-cni\x2d66ea7e2c\x2dba7d\x2dfab7\x2d6021\x2d0343b0026a98.mount: Deactivated successfully. Jan 20 00:42:12.280339 kubelet[2534]: I0120 00:42:12.280042 2534 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-backend-key-pair\") pod \"88df5820-4d41-412b-bac0-45d81ef0f210\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " Jan 20 00:42:12.280661 kubelet[2534]: I0120 00:42:12.280487 2534 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn5qx\" (UniqueName: \"kubernetes.io/projected/88df5820-4d41-412b-bac0-45d81ef0f210-kube-api-access-dn5qx\") pod \"88df5820-4d41-412b-bac0-45d81ef0f210\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " Jan 20 00:42:12.280661 kubelet[2534]: I0120 00:42:12.280520 2534 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-ca-bundle\") pod \"88df5820-4d41-412b-bac0-45d81ef0f210\" (UID: \"88df5820-4d41-412b-bac0-45d81ef0f210\") " Jan 20 00:42:12.281514 kubelet[2534]: I0120 00:42:12.281211 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "88df5820-4d41-412b-bac0-45d81ef0f210" (UID: "88df5820-4d41-412b-bac0-45d81ef0f210"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 00:42:12.291852 kubelet[2534]: I0120 00:42:12.291748 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88df5820-4d41-412b-bac0-45d81ef0f210-kube-api-access-dn5qx" (OuterVolumeSpecName: "kube-api-access-dn5qx") pod "88df5820-4d41-412b-bac0-45d81ef0f210" (UID: "88df5820-4d41-412b-bac0-45d81ef0f210"). InnerVolumeSpecName "kube-api-access-dn5qx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 00:42:12.293990 systemd[1]: var-lib-kubelet-pods-88df5820\x2d4d41\x2d412b\x2dbac0\x2d45d81ef0f210-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddn5qx.mount: Deactivated successfully. Jan 20 00:42:12.294981 kubelet[2534]: I0120 00:42:12.294237 2534 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "88df5820-4d41-412b-bac0-45d81ef0f210" (UID: "88df5820-4d41-412b-bac0-45d81ef0f210"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 00:42:12.294256 systemd[1]: var-lib-kubelet-pods-88df5820\x2d4d41\x2d412b\x2dbac0\x2d45d81ef0f210-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 00:42:12.381761 kubelet[2534]: I0120 00:42:12.381633 2534 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 00:42:12.381761 kubelet[2534]: I0120 00:42:12.381717 2534 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dn5qx\" (UniqueName: \"kubernetes.io/projected/88df5820-4d41-412b-bac0-45d81ef0f210-kube-api-access-dn5qx\") on node \"localhost\" DevicePath \"\"" Jan 20 00:42:12.381761 kubelet[2534]: I0120 00:42:12.381733 2534 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88df5820-4d41-412b-bac0-45d81ef0f210-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 00:42:12.516027 systemd[1]: Removed slice kubepods-besteffort-pod88df5820_4d41_412b_bac0_45d81ef0f210.slice - libcontainer container kubepods-besteffort-pod88df5820_4d41_412b_bac0_45d81ef0f210.slice. Jan 20 00:42:12.996026 systemd[1]: Created slice kubepods-besteffort-pod8c80f384_368b_4ca9_94db_bbaff86a3f79.slice - libcontainer container kubepods-besteffort-pod8c80f384_368b_4ca9_94db_bbaff86a3f79.slice. 
Jan 20 00:42:13.088163 kubelet[2534]: I0120 00:42:13.087969 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c80f384-368b-4ca9-94db-bbaff86a3f79-whisker-ca-bundle\") pod \"whisker-7dcfd46877-pcvss\" (UID: \"8c80f384-368b-4ca9-94db-bbaff86a3f79\") " pod="calico-system/whisker-7dcfd46877-pcvss" Jan 20 00:42:13.088163 kubelet[2534]: I0120 00:42:13.088137 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c80f384-368b-4ca9-94db-bbaff86a3f79-whisker-backend-key-pair\") pod \"whisker-7dcfd46877-pcvss\" (UID: \"8c80f384-368b-4ca9-94db-bbaff86a3f79\") " pod="calico-system/whisker-7dcfd46877-pcvss" Jan 20 00:42:13.089026 kubelet[2534]: I0120 00:42:13.088224 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjmz\" (UniqueName: \"kubernetes.io/projected/8c80f384-368b-4ca9-94db-bbaff86a3f79-kube-api-access-4fjmz\") pod \"whisker-7dcfd46877-pcvss\" (UID: \"8c80f384-368b-4ca9-94db-bbaff86a3f79\") " pod="calico-system/whisker-7dcfd46877-pcvss" Jan 20 00:42:13.313853 containerd[1464]: time="2026-01-20T00:42:13.313660205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dcfd46877-pcvss,Uid:8c80f384-368b-4ca9-94db-bbaff86a3f79,Namespace:calico-system,Attempt:0,}" Jan 20 00:42:13.762698 systemd-networkd[1401]: calif48e8b7941b: Link UP Jan 20 00:42:13.772029 systemd-networkd[1401]: calif48e8b7941b: Gained carrier Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.426 [INFO][3875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.502 [INFO][3875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7dcfd46877--pcvss-eth0 whisker-7dcfd46877- calico-system 8c80f384-368b-4ca9-94db-bbaff86a3f79 979 0 2026-01-20 00:42:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dcfd46877 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dcfd46877-pcvss eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif48e8b7941b [] [] }} ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.504 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.596 [INFO][3896] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" HandleID="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Workload="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.600 [INFO][3896] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" HandleID="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Workload="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dcfd46877-pcvss", "timestamp":"2026-01-20 00:42:13.595999691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.600 [INFO][3896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.601 [INFO][3896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.601 [INFO][3896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.624 [INFO][3896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.656 [INFO][3896] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.676 [INFO][3896] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.679 [INFO][3896] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.690 [INFO][3896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.690 [INFO][3896] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.698 [INFO][3896] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.707 [INFO][3896] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.728 [INFO][3896] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.729 [INFO][3896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" host="localhost" Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.729 [INFO][3896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:13.815464 containerd[1464]: 2026-01-20 00:42:13.729 [INFO][3896] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" HandleID="k8s-pod-network.9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Workload="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.734 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dcfd46877--pcvss-eth0", GenerateName:"whisker-7dcfd46877-", Namespace:"calico-system", SelfLink:"", UID:"8c80f384-368b-4ca9-94db-bbaff86a3f79", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dcfd46877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dcfd46877-pcvss", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif48e8b7941b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.734 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.734 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif48e8b7941b ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.769 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.770 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dcfd46877--pcvss-eth0", GenerateName:"whisker-7dcfd46877-", Namespace:"calico-system", SelfLink:"", UID:"8c80f384-368b-4ca9-94db-bbaff86a3f79", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 42, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dcfd46877", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a", Pod:"whisker-7dcfd46877-pcvss", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif48e8b7941b", MAC:"f6:b2:d8:da:48:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:13.816646 containerd[1464]: 2026-01-20 00:42:13.799 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a" Namespace="calico-system" Pod="whisker-7dcfd46877-pcvss" WorkloadEndpoint="localhost-k8s-whisker--7dcfd46877--pcvss-eth0" Jan 20 00:42:13.866527 kernel: bpftool[3945]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 00:42:13.911124 containerd[1464]: time="2026-01-20T00:42:13.910005368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:13.911124 containerd[1464]: time="2026-01-20T00:42:13.910570919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:13.911124 containerd[1464]: time="2026-01-20T00:42:13.910607532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:13.911124 containerd[1464]: time="2026-01-20T00:42:13.910751140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:13.961096 systemd[1]: Started cri-containerd-9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a.scope - libcontainer container 9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a. 
Jan 20 00:42:14.004527 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:14.066509 containerd[1464]: time="2026-01-20T00:42:14.065793777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dcfd46877-pcvss,Uid:8c80f384-368b-4ca9-94db-bbaff86a3f79,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e7a5162b14f2d22d95f825af2e720496cf28c80292b785e78ec3899f7d2a24a\"" Jan 20 00:42:14.075816 containerd[1464]: time="2026-01-20T00:42:14.075694639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:14.151076 containerd[1464]: time="2026-01-20T00:42:14.150937705Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:14.153539 containerd[1464]: time="2026-01-20T00:42:14.153416035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:42:14.184052 containerd[1464]: time="2026-01-20T00:42:14.160515981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:14.185221 kubelet[2534]: E0120 00:42:14.184609 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:14.185221 kubelet[2534]: E0120 00:42:14.184703 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:14.185221 kubelet[2534]: E0120 00:42:14.184824 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:14.190115 containerd[1464]: time="2026-01-20T00:42:14.189462427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:14.265115 containerd[1464]: time="2026-01-20T00:42:14.264791557Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:14.274220 containerd[1464]: time="2026-01-20T00:42:14.273700911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:14.274220 containerd[1464]: time="2026-01-20T00:42:14.273923366Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:42:14.275198 kubelet[2534]: E0120 00:42:14.275112 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:14.275495 kubelet[2534]: E0120 00:42:14.275208 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:14.275495 kubelet[2534]: E0120 00:42:14.275466 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:14.275724 kubelet[2534]: E0120 00:42:14.275531 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:14.282933 systemd-networkd[1401]: vxlan.calico: Link UP Jan 20 00:42:14.282943 systemd-networkd[1401]: vxlan.calico: Gained carrier Jan 20 00:42:14.493054 kubelet[2534]: I0120 00:42:14.492901 2534 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88df5820-4d41-412b-bac0-45d81ef0f210" path="/var/lib/kubelet/pods/88df5820-4d41-412b-bac0-45d81ef0f210/volumes" Jan 20 00:42:14.857951 kubelet[2534]: E0120 00:42:14.857807 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:15.451810 systemd-networkd[1401]: calif48e8b7941b: Gained IPv6LL Jan 20 00:42:15.861110 kubelet[2534]: E0120 00:42:15.860870 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:16.093205 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Jan 20 00:42:16.491556 containerd[1464]: time="2026-01-20T00:42:16.490769314Z" level=info msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.575 [INFO][4075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.575 [INFO][4075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" iface="eth0" netns="/var/run/netns/cni-ae33cda4-e9e6-9636-c5c7-bfaf8ad54056" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.577 [INFO][4075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" iface="eth0" netns="/var/run/netns/cni-ae33cda4-e9e6-9636-c5c7-bfaf8ad54056" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.577 [INFO][4075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" iface="eth0" netns="/var/run/netns/cni-ae33cda4-e9e6-9636-c5c7-bfaf8ad54056" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.577 [INFO][4075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.577 [INFO][4075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.617 [INFO][4084] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.617 [INFO][4084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.617 [INFO][4084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.626 [WARNING][4084] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.626 [INFO][4084] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.629 [INFO][4084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:16.635465 containerd[1464]: 2026-01-20 00:42:16.632 [INFO][4075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:16.635895 containerd[1464]: time="2026-01-20T00:42:16.635860915Z" level=info msg="TearDown network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" successfully" Jan 20 00:42:16.635895 containerd[1464]: time="2026-01-20T00:42:16.635887631Z" level=info msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" returns successfully" Jan 20 00:42:16.639889 systemd[1]: run-netns-cni\x2dae33cda4\x2de9e6\x2d9636\x2dc5c7\x2dbfaf8ad54056.mount: Deactivated successfully. 
Jan 20 00:42:16.646447 kubelet[2534]: E0120 00:42:16.646243 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:16.647229 containerd[1464]: time="2026-01-20T00:42:16.647114649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c474g,Uid:601adc65-0461-4784-a3c3-b551c4a085b3,Namespace:kube-system,Attempt:1,}" Jan 20 00:42:16.850766 systemd-networkd[1401]: cali3b0fc346547: Link UP Jan 20 00:42:16.852007 systemd-networkd[1401]: cali3b0fc346547: Gained carrier Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.743 [INFO][4095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--c474g-eth0 coredns-66bc5c9577- kube-system 601adc65-0461-4784-a3c3-b551c4a085b3 1011 0 2026-01-20 00:41:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-c474g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b0fc346547 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.744 [INFO][4095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.790 [INFO][4106] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" HandleID="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.791 [INFO][4106] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" HandleID="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-c474g", "timestamp":"2026-01-20 00:42:16.790273499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.791 [INFO][4106] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.791 [INFO][4106] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.791 [INFO][4106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.801 [INFO][4106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.808 [INFO][4106] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.815 [INFO][4106] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.819 [INFO][4106] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.823 [INFO][4106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.823 [INFO][4106] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.825 [INFO][4106] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613 Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.831 [INFO][4106] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.843 [INFO][4106] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.843 [INFO][4106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" host="localhost" Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.843 [INFO][4106] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:16.876260 containerd[1464]: 2026-01-20 00:42:16.843 [INFO][4106] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" HandleID="k8s-pod-network.7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.846 [INFO][4095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c474g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601adc65-0461-4784-a3c3-b551c4a085b3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-c474g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b0fc346547", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.846 [INFO][4095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.847 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b0fc346547 ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.851 
[INFO][4095] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.854 [INFO][4095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c474g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601adc65-0461-4784-a3c3-b551c4a085b3", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613", Pod:"coredns-66bc5c9577-c474g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b0fc346547", MAC:"7a:fa:69:fa:d8:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:16.877678 containerd[1464]: 2026-01-20 00:42:16.871 [INFO][4095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613" Namespace="kube-system" Pod="coredns-66bc5c9577-c474g" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:16.923919 containerd[1464]: time="2026-01-20T00:42:16.923682357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:16.923919 containerd[1464]: time="2026-01-20T00:42:16.923796012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:16.923919 containerd[1464]: time="2026-01-20T00:42:16.923813111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:16.929760 containerd[1464]: time="2026-01-20T00:42:16.929253499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:16.984234 systemd[1]: Started cri-containerd-7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613.scope - libcontainer container 7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613. Jan 20 00:42:17.008061 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:17.059908 containerd[1464]: time="2026-01-20T00:42:17.059695745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-c474g,Uid:601adc65-0461-4784-a3c3-b551c4a085b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613\"" Jan 20 00:42:17.060813 kubelet[2534]: E0120 00:42:17.060758 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:17.071803 containerd[1464]: time="2026-01-20T00:42:17.071634195Z" level=info msg="CreateContainer within sandbox \"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:42:17.105999 containerd[1464]: time="2026-01-20T00:42:17.105679951Z" level=info msg="CreateContainer within sandbox \"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc5059b2566f109ec292dd26ba636563b69ab07f33126635747407c769a58ba9\"" Jan 20 00:42:17.107639 containerd[1464]: time="2026-01-20T00:42:17.107605667Z" level=info msg="StartContainer for \"cc5059b2566f109ec292dd26ba636563b69ab07f33126635747407c769a58ba9\"" Jan 20 00:42:17.176974 systemd[1]: Started cri-containerd-cc5059b2566f109ec292dd26ba636563b69ab07f33126635747407c769a58ba9.scope - libcontainer container cc5059b2566f109ec292dd26ba636563b69ab07f33126635747407c769a58ba9. 
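The WorkloadEndpoint dump above prints the coredns container ports in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5); they are the same ports listed in decimal in the endpoint summary earlier in the trace. A trivial check:

    # Hex port values from the Go struct dump above, back to decimal.
    for name, port in [("dns", 0x35), ("dns-tcp", 0x35),
                       ("metrics", 0x23c1), ("liveness-probe", 0x1f90),
                       ("readiness-probe", 0x1ff5)]:
        print(name, port)   # 53, 53, 9153, 8080, 8181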
Jan 20 00:42:17.228734 containerd[1464]: time="2026-01-20T00:42:17.228687591Z" level=info msg="StartContainer for \"cc5059b2566f109ec292dd26ba636563b69ab07f33126635747407c769a58ba9\" returns successfully" Jan 20 00:42:17.868977 kubelet[2534]: E0120 00:42:17.867815 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:17.908624 kubelet[2534]: I0120 00:42:17.904918 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c474g" podStartSLOduration=41.904893261 podStartE2EDuration="41.904893261s" podCreationTimestamp="2026-01-20 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:42:17.883585757 +0000 UTC m=+45.635019190" watchObservedRunningTime="2026-01-20 00:42:17.904893261 +0000 UTC m=+45.656326693" Jan 20 00:42:18.395897 systemd-networkd[1401]: cali3b0fc346547: Gained IPv6LL Jan 20 00:42:18.493782 containerd[1464]: time="2026-01-20T00:42:18.493720166Z" level=info msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" Jan 20 00:42:18.498159 containerd[1464]: time="2026-01-20T00:42:18.495551879Z" level=info msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" Jan 20 00:42:18.498640 containerd[1464]: time="2026-01-20T00:42:18.498610930Z" level=info msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.666 [INFO][4230] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.669 [INFO][4230] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" iface="eth0" netns="/var/run/netns/cni-60a28cbf-0649-f6aa-ddce-ca4a847e0ce0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.669 [INFO][4230] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" iface="eth0" netns="/var/run/netns/cni-60a28cbf-0649-f6aa-ddce-ca4a847e0ce0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.671 [INFO][4230] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" iface="eth0" netns="/var/run/netns/cni-60a28cbf-0649-f6aa-ddce-ca4a847e0ce0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.671 [INFO][4230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.671 [INFO][4230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.760 [INFO][4263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.763 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.764 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.783 [WARNING][4263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.783 [INFO][4263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.790 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:18.809093 containerd[1464]: 2026-01-20 00:42:18.800 [INFO][4230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:18.813274 containerd[1464]: time="2026-01-20T00:42:18.812945233Z" level=info msg="TearDown network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" successfully" Jan 20 00:42:18.813274 containerd[1464]: time="2026-01-20T00:42:18.812992825Z" level=info msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" returns successfully" Jan 20 00:42:18.820596 systemd[1]: run-netns-cni\x2d60a28cbf\x2d0649\x2df6aa\x2dddce\x2dca4a847e0ce0.mount: Deactivated successfully. Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.728 [INFO][4244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.729 [INFO][4244] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" iface="eth0" netns="/var/run/netns/cni-f163aa1a-5147-2292-42fb-5ae4f981adcd" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.729 [INFO][4244] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" iface="eth0" netns="/var/run/netns/cni-f163aa1a-5147-2292-42fb-5ae4f981adcd" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.729 [INFO][4244] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" iface="eth0" netns="/var/run/netns/cni-f163aa1a-5147-2292-42fb-5ae4f981adcd" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.729 [INFO][4244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.729 [INFO][4244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.772 [INFO][4277] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.773 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.789 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.801 [WARNING][4277] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.801 [INFO][4277] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.804 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:18.829509 containerd[1464]: 2026-01-20 00:42:18.813 [INFO][4244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:18.832339 kubelet[2534]: E0120 00:42:18.832020 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:18.833743 containerd[1464]: time="2026-01-20T00:42:18.833670732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wssqv,Uid:a912c1f3-a959-479f-b8c4-402f78743287,Namespace:kube-system,Attempt:1,}" Jan 20 00:42:18.835183 systemd[1]: run-netns-cni\x2df163aa1a\x2d5147\x2d2292\x2d42fb\x2d5ae4f981adcd.mount: Deactivated successfully. 
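The two Calico blocks above are CNI DELs for the stopped sandboxes: the plugin enters the pod's network namespace (the veth is already gone here), then releases the IP allocation by its handle ID under the host-wide IPAM lock, and treats a missing allocation as a warning rather than an error so that a retried DEL stays idempotent. Below is a minimal self-contained sketch of that release step, assuming an in-memory map in place of Calico's datastore; the type and field names are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// ipamStore stands in for Calico's datastore: allocations keyed by handle ID.
// The single mutex plays the role of the host-wide IPAM lock seen in the log.
type ipamStore struct {
	mu          sync.Mutex
	allocations map[string]string // handle ID -> assigned address
}

// releaseByHandle mirrors the DEL-side behaviour: take the host-wide lock,
// look the handle up, and treat a missing allocation as a no-op so a retried
// CNI DEL stays idempotent ("Asked to release address but it doesn't exist.
// Ignoring").
func (s *ipamStore) releaseByHandle(handleID string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	addr, ok := s.allocations[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(s.allocations, handleID)
	fmt.Printf("released %s (handle %s)\n", addr, handleID)
}

func main() {
	store := &ipamStore{allocations: map[string]string{
		"k8s-pod-network.example-handle": "192.168.88.130",
	}}

	store.releaseByHandle("k8s-pod-network.example-handle") // releases the address
	store.releaseByHandle("k8s-pod-network.example-handle") // second DEL is a no-op
}
```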
Jan 20 00:42:18.840666 containerd[1464]: time="2026-01-20T00:42:18.837073316Z" level=info msg="TearDown network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" successfully" Jan 20 00:42:18.840666 containerd[1464]: time="2026-01-20T00:42:18.837188847Z" level=info msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" returns successfully" Jan 20 00:42:18.862550 containerd[1464]: time="2026-01-20T00:42:18.862527168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b77768bf7-8nrk9,Uid:a7b5c492-f30c-4416-b600-12afd3fc29bc,Namespace:calico-system,Attempt:1,}" Jan 20 00:42:18.873745 kubelet[2534]: E0120 00:42:18.873637 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.701 [INFO][4239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.704 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" iface="eth0" netns="/var/run/netns/cni-3ff5df54-b728-54c1-3bb0-18250e2c2555" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.704 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" iface="eth0" netns="/var/run/netns/cni-3ff5df54-b728-54c1-3bb0-18250e2c2555" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.705 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" iface="eth0" netns="/var/run/netns/cni-3ff5df54-b728-54c1-3bb0-18250e2c2555" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.705 [INFO][4239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.706 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.796 [INFO][4269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.797 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.805 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.845 [WARNING][4269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.845 [INFO][4269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.866 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:18.891230 containerd[1464]: 2026-01-20 00:42:18.878 [INFO][4239] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:18.892168 containerd[1464]: time="2026-01-20T00:42:18.891244629Z" level=info msg="TearDown network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" successfully" Jan 20 00:42:18.892168 containerd[1464]: time="2026-01-20T00:42:18.891744424Z" level=info msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" returns successfully" Jan 20 00:42:18.894312 systemd[1]: run-netns-cni\x2d3ff5df54\x2db728\x2d54c1\x2d3bb0\x2d18250e2c2555.mount: Deactivated successfully. Jan 20 00:42:18.904592 containerd[1464]: time="2026-01-20T00:42:18.904319021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-97g72,Uid:94df424b-d767-451c-98d9-6b195890f32a,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:42:19.324916 systemd-networkd[1401]: cali84ea0611d73: Link UP Jan 20 00:42:19.329461 systemd-networkd[1401]: cali84ea0611d73: Gained carrier Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.125 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0 calico-apiserver-5fcdf9c988- calico-apiserver 94df424b-d767-451c-98d9-6b195890f32a 1037 0 2026-01-20 00:41:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcdf9c988 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcdf9c988-97g72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali84ea0611d73 [] [] }} ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.125 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.229 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" 
HandleID="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.231 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" HandleID="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bec30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fcdf9c988-97g72", "timestamp":"2026-01-20 00:42:19.22903907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.231 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.231 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.232 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.250 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.260 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.270 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.274 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.279 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.279 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.282 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69 Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.295 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.307 [INFO][4338] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.307 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" host="localhost" Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.307 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:19.369570 containerd[1464]: 2026-01-20 00:42:19.307 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" HandleID="k8s-pod-network.598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.314 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"94df424b-d767-451c-98d9-6b195890f32a", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcdf9c988-97g72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ea0611d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.316 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.316 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84ea0611d73 ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.326 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.327 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"94df424b-d767-451c-98d9-6b195890f32a", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69", Pod:"calico-apiserver-5fcdf9c988-97g72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ea0611d73", MAC:"aa:46:c3:e5:a5:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.370658 containerd[1464]: 2026-01-20 00:42:19.359 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-97g72" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:19.465251 containerd[1464]: time="2026-01-20T00:42:19.459954267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:19.465251 containerd[1464]: time="2026-01-20T00:42:19.460517115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:19.465251 containerd[1464]: time="2026-01-20T00:42:19.463658640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.462016 systemd-networkd[1401]: cali707c6aeb6df: Link UP Jan 20 00:42:19.464231 systemd-networkd[1401]: cali707c6aeb6df: Gained carrier Jan 20 00:42:19.473901 containerd[1464]: time="2026-01-20T00:42:19.470892161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.491672 containerd[1464]: time="2026-01-20T00:42:19.491468933Z" level=info msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" Jan 20 00:42:19.491972 containerd[1464]: time="2026-01-20T00:42:19.491885908Z" level=info msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.111 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0 calico-kube-controllers-6b77768bf7- calico-system a7b5c492-f30c-4416-b600-12afd3fc29bc 1038 0 2026-01-20 00:41:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b77768bf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b77768bf7-8nrk9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali707c6aeb6df [] [] }} ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.112 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.235 [INFO][4330] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" HandleID="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.241 [INFO][4330] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" HandleID="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b77768bf7-8nrk9", "timestamp":"2026-01-20 00:42:19.235669121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.241 [INFO][4330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.308 [INFO][4330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.308 [INFO][4330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.352 [INFO][4330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.364 [INFO][4330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.376 [INFO][4330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.388 [INFO][4330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.399 [INFO][4330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.401 [INFO][4330] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.404 [INFO][4330] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23 Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.421 [INFO][4330] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.433 [INFO][4330] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.433 [INFO][4330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" host="localhost" Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.433 [INFO][4330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:19.520360 containerd[1464]: 2026-01-20 00:42:19.433 [INFO][4330] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" HandleID="k8s-pod-network.f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.446 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0", GenerateName:"calico-kube-controllers-6b77768bf7-", Namespace:"calico-system", SelfLink:"", UID:"a7b5c492-f30c-4416-b600-12afd3fc29bc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b77768bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b77768bf7-8nrk9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali707c6aeb6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.447 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.447 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali707c6aeb6df ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.478 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.479 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0", GenerateName:"calico-kube-controllers-6b77768bf7-", Namespace:"calico-system", SelfLink:"", UID:"a7b5c492-f30c-4416-b600-12afd3fc29bc", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b77768bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23", Pod:"calico-kube-controllers-6b77768bf7-8nrk9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali707c6aeb6df", MAC:"2e:3c:52:ba:e0:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.522143 containerd[1464]: 2026-01-20 00:42:19.505 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23" Namespace="calico-system" Pod="calico-kube-controllers-6b77768bf7-8nrk9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:19.556750 systemd[1]: Started cri-containerd-598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69.scope - libcontainer container 598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69. Jan 20 00:42:19.649170 systemd-networkd[1401]: calid3ead960850: Link UP Jan 20 00:42:19.659650 systemd-networkd[1401]: calid3ead960850: Gained carrier Jan 20 00:42:19.663589 containerd[1464]: time="2026-01-20T00:42:19.661646600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:19.663589 containerd[1464]: time="2026-01-20T00:42:19.661696638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:19.663589 containerd[1464]: time="2026-01-20T00:42:19.661706514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.663589 containerd[1464]: time="2026-01-20T00:42:19.661835048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.111 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wssqv-eth0 coredns-66bc5c9577- kube-system a912c1f3-a959-479f-b8c4-402f78743287 1036 0 2026-01-20 00:41:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wssqv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid3ead960850 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.112 [INFO][4288] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.249 [INFO][4337] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" HandleID="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.249 [INFO][4337] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" HandleID="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wssqv", "timestamp":"2026-01-20 00:42:19.249052051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.249 [INFO][4337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.434 [INFO][4337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.434 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.460 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.490 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.524 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.533 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.551 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.551 [INFO][4337] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.562 [INFO][4337] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4 Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.578 [INFO][4337] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.600 [INFO][4337] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.600 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" host="localhost" Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.600 [INFO][4337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:19.717613 containerd[1464]: 2026-01-20 00:42:19.600 [INFO][4337] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" HandleID="k8s-pod-network.0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.624 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wssqv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a912c1f3-a959-479f-b8c4-402f78743287", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wssqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3ead960850", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.624 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.624 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3ead960850 ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.668 
[INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.670 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wssqv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a912c1f3-a959-479f-b8c4-402f78743287", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4", Pod:"coredns-66bc5c9577-wssqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3ead960850", MAC:"e6:77:f3:86:af:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:19.718686 containerd[1464]: 2026-01-20 00:42:19.701 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4" Namespace="kube-system" Pod="coredns-66bc5c9577-wssqv" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:19.725546 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:19.763993 systemd[1]: Started cri-containerd-f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23.scope - libcontainer container f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23. 
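Each of the ADD flows above runs the same IPAM sequence: take the host-wide lock, confirm this host's affinity for the block 192.168.88.128/26, load the block, and claim the first free address for a new handle, which is why the three pods receive 192.168.88.131, .132 and .133 in order. Below is a minimal sketch of that first-free walk over a /26, assuming a plain in-memory bitmap in place of Calico's block document; the helper names are illustrative.

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a stand-in for Calico's IPAM block: a CIDR plus a used-address
// bitmap. Calico keeps this as a datastore document; the slice here is only
// an illustration.
type block struct {
	cidr netip.Prefix // e.g. 192.168.88.128/26
	used []bool       // one slot per address in the block
}

func newBlock(cidr netip.Prefix) *block {
	size := 1 << (cidr.Addr().BitLen() - cidr.Bits()) // 64 addresses for a /26
	return &block{cidr: cidr, used: make([]bool, size)}
}

// assign claims the first free address, mirroring the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := newBlock(netip.MustParsePrefix("192.168.88.128/26"))

	// Pretend three addresses were handed out earlier in the boot, as the
	// log implies (the new claims here start at .131).
	for i := 0; i < 3; i++ {
		b.assign()
	}
	for i := 0; i < 3; i++ {
		if a, ok := b.assign(); ok {
			fmt.Println("claimed", a) // .131, .132, .133 in order
		}
	}
}
```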
Jan 20 00:42:19.865544 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:19.881966 kubelet[2534]: E0120 00:42:19.881671 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.706 [INFO][4420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.706 [INFO][4420] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" iface="eth0" netns="/var/run/netns/cni-20b7434d-9248-762c-0e0e-b532cd73133a" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.707 [INFO][4420] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" iface="eth0" netns="/var/run/netns/cni-20b7434d-9248-762c-0e0e-b532cd73133a" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.708 [INFO][4420] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" iface="eth0" netns="/var/run/netns/cni-20b7434d-9248-762c-0e0e-b532cd73133a" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.709 [INFO][4420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.709 [INFO][4420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.853 [INFO][4475] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.854 [INFO][4475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.854 [INFO][4475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.869 [WARNING][4475] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.869 [INFO][4475] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.874 [INFO][4475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 00:42:19.886808 containerd[1464]: 2026-01-20 00:42:19.879 [INFO][4420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:19.888610 containerd[1464]: time="2026-01-20T00:42:19.887577570Z" level=info msg="TearDown network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" successfully" Jan 20 00:42:19.888610 containerd[1464]: time="2026-01-20T00:42:19.887652890Z" level=info msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" returns successfully" Jan 20 00:42:19.892170 systemd[1]: run-netns-cni\x2d20b7434d\x2d9248\x2d762c\x2d0e0e\x2db532cd73133a.mount: Deactivated successfully. Jan 20 00:42:19.906521 containerd[1464]: time="2026-01-20T00:42:19.899541797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:19.906521 containerd[1464]: time="2026-01-20T00:42:19.899641220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:19.906521 containerd[1464]: time="2026-01-20T00:42:19.899670631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.906521 containerd[1464]: time="2026-01-20T00:42:19.900019989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:19.906521 containerd[1464]: time="2026-01-20T00:42:19.903090911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-cp8b5,Uid:183d032b-800c-4fa4-8adf-a6b6125809b8,Namespace:calico-apiserver,Attempt:1,}" Jan 20 00:42:19.968093 systemd[1]: Started cri-containerd-0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4.scope - libcontainer container 0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4. Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.790 [INFO][4426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.790 [INFO][4426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" iface="eth0" netns="/var/run/netns/cni-15988228-0c35-82e4-f5db-480b60e84054" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.792 [INFO][4426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" iface="eth0" netns="/var/run/netns/cni-15988228-0c35-82e4-f5db-480b60e84054" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.792 [INFO][4426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" iface="eth0" netns="/var/run/netns/cni-15988228-0c35-82e4-f5db-480b60e84054" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.793 [INFO][4426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.793 [INFO][4426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.885 [INFO][4498] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.898 [INFO][4498] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.898 [INFO][4498] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.928 [WARNING][4498] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.928 [INFO][4498] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.932 [INFO][4498] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:19.994555 containerd[1464]: 2026-01-20 00:42:19.960 [INFO][4426] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:19.998481 containerd[1464]: time="2026-01-20T00:42:19.996609076Z" level=info msg="TearDown network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" successfully" Jan 20 00:42:20.007304 containerd[1464]: time="2026-01-20T00:42:20.007176842Z" level=info msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" returns successfully" Jan 20 00:42:20.016889 containerd[1464]: time="2026-01-20T00:42:20.016546970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7994,Uid:28fdd63b-baae-4d6e-b08c-1195d95658e8,Namespace:calico-system,Attempt:1,}" Jan 20 00:42:20.027462 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:20.056115 containerd[1464]: time="2026-01-20T00:42:20.056025333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-97g72,Uid:94df424b-d767-451c-98d9-6b195890f32a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69\"" Jan 20 00:42:20.073103 containerd[1464]: time="2026-01-20T00:42:20.072858996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:20.137340 containerd[1464]: time="2026-01-20T00:42:20.136277245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b77768bf7-8nrk9,Uid:a7b5c492-f30c-4416-b600-12afd3fc29bc,Namespace:calico-system,Attempt:1,} returns sandbox id \"f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23\"" Jan 20 00:42:20.170003 containerd[1464]: time="2026-01-20T00:42:20.169765840Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:20.176452 containerd[1464]: time="2026-01-20T00:42:20.176061041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:20.176452 containerd[1464]: time="2026-01-20T00:42:20.176334359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:20.178656 kubelet[2534]: E0120 00:42:20.178137 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:20.180344 kubelet[2534]: E0120 00:42:20.178649 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:20.180650 kubelet[2534]: E0120 00:42:20.180566 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-97g72_calico-apiserver(94df424b-d767-451c-98d9-6b195890f32a): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:20.180650 kubelet[2534]: E0120 00:42:20.180604 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:20.194025 containerd[1464]: time="2026-01-20T00:42:20.193813207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:42:20.225061 containerd[1464]: time="2026-01-20T00:42:20.224659589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wssqv,Uid:a912c1f3-a959-479f-b8c4-402f78743287,Namespace:kube-system,Attempt:1,} returns sandbox id \"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4\"" Jan 20 00:42:20.230629 kubelet[2534]: E0120 00:42:20.230592 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:20.260490 containerd[1464]: time="2026-01-20T00:42:20.260331286Z" level=info msg="CreateContainer within sandbox \"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 00:42:20.296021 containerd[1464]: time="2026-01-20T00:42:20.295780160Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:20.309181 containerd[1464]: time="2026-01-20T00:42:20.309041435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:42:20.309181 containerd[1464]: time="2026-01-20T00:42:20.309168266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:42:20.309945 kubelet[2534]: E0120 00:42:20.309847 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:20.310029 kubelet[2534]: E0120 00:42:20.309963 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:20.310124 kubelet[2534]: E0120 00:42:20.310092 2534 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b77768bf7-8nrk9_calico-system(a7b5c492-f30c-4416-b600-12afd3fc29bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:20.310245 kubelet[2534]: E0120 00:42:20.310151 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:20.313336 containerd[1464]: time="2026-01-20T00:42:20.313196630Z" level=info msg="CreateContainer within sandbox \"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"172933225c5daef38b6e5e7ce5b7325c980281afa70f6616ef230c0577c32561\"" Jan 20 00:42:20.316029 containerd[1464]: time="2026-01-20T00:42:20.315750820Z" level=info msg="StartContainer for \"172933225c5daef38b6e5e7ce5b7325c980281afa70f6616ef230c0577c32561\"" Jan 20 00:42:20.427564 systemd[1]: Started cri-containerd-172933225c5daef38b6e5e7ce5b7325c980281afa70f6616ef230c0577c32561.scope - libcontainer container 172933225c5daef38b6e5e7ce5b7325c980281afa70f6616ef230c0577c32561. Jan 20 00:42:20.474973 systemd-networkd[1401]: cali2c263026a81: Link UP Jan 20 00:42:20.480916 systemd-networkd[1401]: cali2c263026a81: Gained carrier Jan 20 00:42:20.501521 containerd[1464]: time="2026-01-20T00:42:20.500551998Z" level=info msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.207 [INFO][4550] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0 calico-apiserver-5fcdf9c988- calico-apiserver 183d032b-800c-4fa4-8adf-a6b6125809b8 1054 0 2026-01-20 00:41:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fcdf9c988 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5fcdf9c988-cp8b5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2c263026a81 [] [] }} ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.211 [INFO][4550] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.301 [INFO][4600] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" HandleID="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.302 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" HandleID="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5fcdf9c988-cp8b5", "timestamp":"2026-01-20 00:42:20.301937222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.302 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.302 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.302 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.323 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.359 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.374 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.379 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.385 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.387 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.393 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.411 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.432 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.432 [INFO][4600] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" host="localhost" Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.432 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:20.518736 containerd[1464]: 2026-01-20 00:42:20.432 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" HandleID="k8s-pod-network.cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.463 [INFO][4550] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"183d032b-800c-4fa4-8adf-a6b6125809b8", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5fcdf9c988-cp8b5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c263026a81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.463 [INFO][4550] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.464 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c263026a81 ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.479 [INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" 
Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.482 [INFO][4550] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"183d032b-800c-4fa4-8adf-a6b6125809b8", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b", Pod:"calico-apiserver-5fcdf9c988-cp8b5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c263026a81", MAC:"a2:8b:bc:be:63:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:20.519751 containerd[1464]: 2026-01-20 00:42:20.507 [INFO][4550] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b" Namespace="calico-apiserver" Pod="calico-apiserver-5fcdf9c988-cp8b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:20.560570 containerd[1464]: time="2026-01-20T00:42:20.558780327Z" level=info msg="StartContainer for \"172933225c5daef38b6e5e7ce5b7325c980281afa70f6616ef230c0577c32561\" returns successfully" Jan 20 00:42:20.620802 containerd[1464]: time="2026-01-20T00:42:20.618693024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:20.620802 containerd[1464]: time="2026-01-20T00:42:20.619707746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:20.620802 containerd[1464]: time="2026-01-20T00:42:20.619831803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:20.633692 containerd[1464]: time="2026-01-20T00:42:20.629911518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:20.673229 systemd-networkd[1401]: calif825032be80: Link UP Jan 20 00:42:20.679162 systemd-networkd[1401]: calif825032be80: Gained carrier Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.288 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q7994-eth0 csi-node-driver- calico-system 28fdd63b-baae-4d6e-b08c-1195d95658e8 1057 0 2026-01-20 00:41:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q7994 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif825032be80 [] [] }} ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.289 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.396 [INFO][4611] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" HandleID="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.396 [INFO][4611] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" HandleID="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Workload="localhost-k8s-csi--node--driver--q7994-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q7994", "timestamp":"2026-01-20 00:42:20.3961476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.396 [INFO][4611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.433 [INFO][4611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
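Every ghcr.io/flatcar/calico/*:v3.30.4 pull in this log fails the same way: containerd reports "trying next host - response was http.StatusNotFound", the PullImage RPC returns NotFound, and kubelet records ErrImagePull (later ImagePullBackOff) against the owning pod. One hedged way to triage a plain-text journal export like this one is to tally those two line shapes; the sketch below does that with regexes written against the exact formatting visible above, so it is illustrative rather than a general journald parser.

```python
#!/usr/bin/env python3
"""Summarize failed image pulls from a plain-text journal export (read from stdin).

Illustrative only: the patterns mirror the containerd/kubelet lines in this log.
"""
import re
import sys
from collections import defaultdict

# containerd: level=error msg="PullImage \"<ref>\" failed"
PULL_FAIL = re.compile(r'level=error msg="PullImage \\"([^"\\]+)\\" failed"')
# kubelet: ... ErrImagePull ... pod="<namespace>/<name>"
POD_ERR = re.compile(r'ErrImagePull.*?pod="([^"]+)"')

failed_images = defaultdict(int)
failing_pods = set()

for line in sys.stdin:
    # A single exported line may hold several journal entries, so scan for all matches.
    for m in PULL_FAIL.finditer(line):
        failed_images[m.group(1)] += 1
    for p in POD_ERR.finditer(line):
        failing_pods.add(p.group(1))

for ref, count in sorted(failed_images.items()):
    print(f"{count:3d}x  {ref}")
print("pods affected:", ", ".join(sorted(failing_pods)) or "none")
```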
Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.434 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.468 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.483 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.505 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.516 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.527 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.527 [INFO][4611] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.552 [INFO][4611] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135 Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.579 [INFO][4611] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.604 [INFO][4611] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.604 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" host="localhost" Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.605 [INFO][4611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
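The IPAM entries above show Calico handing out one address at a time from the host-affine block 192.168.88.128/26 under the host-wide lock: the apiserver pod received 192.168.88.134, csi-node-driver-q7994 gets .135 here, and goldmane receives .136 further down. The sketch below is not Calico's allocator, just a toy first-fit loop over the same /26 with Python's ipaddress module, to illustrate why consecutive sandboxes on this host land on consecutive addresses; the pre-allocated range is an assumption for the example.

```python
from ipaddress import ip_network, ip_address

# Host-affine block from the log; a /26 holds 64 addresses.
block = ip_network("192.168.88.128/26")

# Assumption for illustration: pretend .128-.133 were already handed out
# to earlier pods on this host.
allocated = {ip_address(f"192.168.88.{n}") for n in range(128, 134)}

def assign_next(block, allocated):
    """Toy first-fit assignment: return the lowest free address in the block.

    Calico's real IPAM records allocations in its datastore and serializes
    them with the host-wide lock seen in the log; this loop only illustrates
    the resulting .134, .135, .136 sequence.
    """
    for addr in block:
        if addr not in allocated:
            allocated.add(addr)
            return addr
    raise RuntimeError("block exhausted")

for pod in ("calico-apiserver-5fcdf9c988-cp8b5",
            "csi-node-driver-q7994",
            "goldmane-7c778bb748-c8bcc"):
    print(pod, "->", assign_next(block, allocated))
```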
Jan 20 00:42:20.729053 containerd[1464]: 2026-01-20 00:42:20.605 [INFO][4611] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" HandleID="k8s-pod-network.63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.641 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7994-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28fdd63b-baae-4d6e-b08c-1195d95658e8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q7994", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif825032be80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.656 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.656 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif825032be80 ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.676 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.683 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7994-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28fdd63b-baae-4d6e-b08c-1195d95658e8", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135", Pod:"csi-node-driver-q7994", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif825032be80", MAC:"4a:e3:47:c4:4c:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:20.730787 containerd[1464]: 2026-01-20 00:42:20.718 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135" Namespace="calico-system" Pod="csi-node-driver-q7994" WorkloadEndpoint="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:20.732785 systemd[1]: Started cri-containerd-cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b.scope - libcontainer container cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b. Jan 20 00:42:20.799994 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:20.824241 systemd[1]: run-netns-cni\x2d15988228\x2d0c35\x2d82e4\x2df5db\x2d480b60e84054.mount: Deactivated successfully. Jan 20 00:42:20.843258 containerd[1464]: time="2026-01-20T00:42:20.839945246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:20.843258 containerd[1464]: time="2026-01-20T00:42:20.840033650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:20.843258 containerd[1464]: time="2026-01-20T00:42:20.840061739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:20.843258 containerd[1464]: time="2026-01-20T00:42:20.840197366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:20.892468 systemd-networkd[1401]: cali84ea0611d73: Gained IPv6LL Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.753 [INFO][4670] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.754 [INFO][4670] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" iface="eth0" netns="/var/run/netns/cni-9f279202-1603-a83e-9bbd-dde3fa4e6974" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.754 [INFO][4670] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" iface="eth0" netns="/var/run/netns/cni-9f279202-1603-a83e-9bbd-dde3fa4e6974" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.758 [INFO][4670] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" iface="eth0" netns="/var/run/netns/cni-9f279202-1603-a83e-9bbd-dde3fa4e6974" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.758 [INFO][4670] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.759 [INFO][4670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.857 [INFO][4735] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.857 [INFO][4735] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.858 [INFO][4735] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.867 [WARNING][4735] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.868 [INFO][4735] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.873 [INFO][4735] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:20.901970 containerd[1464]: 2026-01-20 00:42:20.879 [INFO][4670] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:20.902538 kubelet[2534]: E0120 00:42:20.901472 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:20.921145 containerd[1464]: time="2026-01-20T00:42:20.920911646Z" level=info msg="TearDown network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" successfully" Jan 20 00:42:20.921874 containerd[1464]: time="2026-01-20T00:42:20.921797493Z" level=info msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" returns successfully" Jan 20 00:42:20.922698 systemd[1]: Started cri-containerd-63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135.scope - libcontainer container 63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135. Jan 20 00:42:20.932336 kubelet[2534]: I0120 00:42:20.930478 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wssqv" podStartSLOduration=44.930458774 podStartE2EDuration="44.930458774s" podCreationTimestamp="2026-01-20 00:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 00:42:20.930285781 +0000 UTC m=+48.681719224" watchObservedRunningTime="2026-01-20 00:42:20.930458774 +0000 UTC m=+48.681892176" Jan 20 00:42:20.931781 systemd[1]: run-netns-cni\x2d9f279202\x2d1603\x2da83e\x2d9bbd\x2ddde3fa4e6974.mount: Deactivated successfully. Jan 20 00:42:20.939665 containerd[1464]: time="2026-01-20T00:42:20.939549577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c8bcc,Uid:16f84c69-9c06-4195-a968-2c29cf809ca6,Namespace:calico-system,Attempt:1,}" Jan 20 00:42:20.940856 containerd[1464]: time="2026-01-20T00:42:20.940823810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fcdf9c988-cp8b5,Uid:183d032b-800c-4fa4-8adf-a6b6125809b8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b\"" Jan 20 00:42:20.943273 kubelet[2534]: E0120 00:42:20.943089 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:20.945706 containerd[1464]: time="2026-01-20T00:42:20.945672316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:20.957261 kubelet[2534]: E0120 00:42:20.957156 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:21.030290 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:21.048234 containerd[1464]: time="2026-01-20T00:42:21.048033329Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:21.050514 containerd[1464]: time="2026-01-20T00:42:21.050256846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:21.050514 containerd[1464]: time="2026-01-20T00:42:21.050457170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:21.051820 kubelet[2534]: E0120 00:42:21.051634 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:21.052763 kubelet[2534]: E0120 00:42:21.052446 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:21.052763 kubelet[2534]: E0120 00:42:21.052730 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-cp8b5_calico-apiserver(183d032b-800c-4fa4-8adf-a6b6125809b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:21.052896 kubelet[2534]: E0120 00:42:21.052788 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:21.086067 systemd-networkd[1401]: cali707c6aeb6df: Gained IPv6LL Jan 20 00:42:21.099867 containerd[1464]: time="2026-01-20T00:42:21.099711540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q7994,Uid:28fdd63b-baae-4d6e-b08c-1195d95658e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135\"" Jan 20 00:42:21.110835 containerd[1464]: time="2026-01-20T00:42:21.110794279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:42:21.208278 containerd[1464]: 
time="2026-01-20T00:42:21.206791168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:21.209549 containerd[1464]: time="2026-01-20T00:42:21.209448976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:42:21.209789 containerd[1464]: time="2026-01-20T00:42:21.209456570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:42:21.210916 kubelet[2534]: E0120 00:42:21.210189 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:21.210916 kubelet[2534]: E0120 00:42:21.210264 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:21.210916 kubelet[2534]: E0120 00:42:21.210507 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:21.212181 containerd[1464]: time="2026-01-20T00:42:21.212128833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:42:21.303994 containerd[1464]: time="2026-01-20T00:42:21.303862124Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:21.318033 containerd[1464]: time="2026-01-20T00:42:21.317845045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:42:21.318780 containerd[1464]: time="2026-01-20T00:42:21.318122105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:42:21.319535 kubelet[2534]: E0120 00:42:21.319263 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:21.319783 kubelet[2534]: E0120 00:42:21.319734 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:21.320495 kubelet[2534]: E0120 00:42:21.320151 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:21.320495 kubelet[2534]: E0120 00:42:21.320356 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:21.366810 systemd-networkd[1401]: cali33c83a9870a: Link UP Jan 20 00:42:21.368771 systemd-networkd[1401]: cali33c83a9870a: Gained carrier Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.167 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--c8bcc-eth0 goldmane-7c778bb748- calico-system 16f84c69-9c06-4195-a968-2c29cf809ca6 1078 0 2026-01-20 00:41:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-c8bcc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali33c83a9870a [] [] }} ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.168 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.264 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" HandleID="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.264 [INFO][4810] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" HandleID="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333d60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-c8bcc", "timestamp":"2026-01-20 00:42:21.264662959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.264 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.265 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.265 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.276 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.289 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.300 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.304 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.309 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.309 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.312 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7 Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.319 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.339 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.343 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" host="localhost" Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.343 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
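A few entries back, kubelet's pod_startup_latency_tracker reported podStartSLOduration=44.930458774 for coredns-66bc5c9577-wssqv. That figure is consistent with watchObservedRunningTime minus podCreationTimestamp, with nothing subtracted for image pulling because firstStartedPulling/lastFinishedPulling are the zero time (the image was already present). A small, hedged check of that arithmetic with the timestamps copied from the log:

```python
from datetime import datetime, timezone

# Timestamps from the pod_startup_latency_tracker entry above,
# fractional seconds trimmed to microsecond precision.
creation      = datetime(2026, 1, 20, 0, 41, 36, 0,      tzinfo=timezone.utc)
watch_running = datetime(2026, 1, 20, 0, 42, 20, 930459, tzinfo=timezone.utc)

# No image-pull interval to subtract here, so the SLO duration is simply
# running time minus creation time.
print((watch_running - creation).total_seconds())  # ~44.930459, matching podStartSLOduration=44.930458774
```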
Jan 20 00:42:21.406855 containerd[1464]: 2026-01-20 00:42:21.343 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" HandleID="k8s-pod-network.c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.355 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c8bcc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16f84c69-9c06-4195-a968-2c29cf809ca6", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-c8bcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33c83a9870a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.355 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.355 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33c83a9870a ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.368 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.371 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c8bcc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16f84c69-9c06-4195-a968-2c29cf809ca6", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7", Pod:"goldmane-7c778bb748-c8bcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33c83a9870a", MAC:"be:c9:2d:5f:f0:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:21.408119 containerd[1464]: 2026-01-20 00:42:21.399 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7" Namespace="calico-system" Pod="goldmane-7c778bb748-c8bcc" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:21.473294 containerd[1464]: time="2026-01-20T00:42:21.471646051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 00:42:21.473294 containerd[1464]: time="2026-01-20T00:42:21.471715293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 00:42:21.473294 containerd[1464]: time="2026-01-20T00:42:21.471831084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:21.474024 containerd[1464]: time="2026-01-20T00:42:21.473810224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 00:42:21.512093 systemd[1]: Started cri-containerd-c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7.scope - libcontainer container c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7. 
Jan 20 00:42:21.534264 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 00:42:21.583650 containerd[1464]: time="2026-01-20T00:42:21.583493238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c8bcc,Uid:16f84c69-9c06-4195-a968-2c29cf809ca6,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7\"" Jan 20 00:42:21.588329 containerd[1464]: time="2026-01-20T00:42:21.588297270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:42:21.654305 containerd[1464]: time="2026-01-20T00:42:21.654151589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:21.657247 containerd[1464]: time="2026-01-20T00:42:21.657143667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:42:21.657692 containerd[1464]: time="2026-01-20T00:42:21.657294712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:21.657930 kubelet[2534]: E0120 00:42:21.657702 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:21.658010 kubelet[2534]: E0120 00:42:21.657931 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:21.658146 kubelet[2534]: E0120 00:42:21.658033 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c8bcc_calico-system(16f84c69-9c06-4195-a968-2c29cf809ca6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:21.658146 kubelet[2534]: E0120 00:42:21.658125 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:21.659961 systemd-networkd[1401]: calid3ead960850: Gained IPv6LL Jan 20 00:42:21.852027 systemd-networkd[1401]: calif825032be80: Gained IPv6LL Jan 20 00:42:21.962009 kubelet[2534]: E0120 00:42:21.961472 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:21.966186 kubelet[2534]: E0120 00:42:21.965762 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:21.967919 kubelet[2534]: E0120 00:42:21.967229 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:21.969978 kubelet[2534]: E0120 00:42:21.969879 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:21.970223 kubelet[2534]: E0120 00:42:21.969921 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:21.977047 kubelet[2534]: E0120 00:42:21.976066 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:22.363678 systemd-networkd[1401]: cali2c263026a81: Gained IPv6LL Jan 20 00:42:22.747943 systemd-networkd[1401]: cali33c83a9870a: Gained IPv6LL Jan 20 00:42:22.753195 kubelet[2534]: I0120 00:42:22.753107 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 00:42:22.753973 kubelet[2534]: E0120 00:42:22.753886 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:22.972684 kubelet[2534]: E0120 00:42:22.972599 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:22.973985 kubelet[2534]: E0120 00:42:22.973345 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:22.973985 kubelet[2534]: E0120 00:42:22.973890 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:23.034624 kubelet[2534]: E0120 00:42:23.032758 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:28.491469 containerd[1464]: time="2026-01-20T00:42:28.491248841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:28.577523 containerd[1464]: time="2026-01-20T00:42:28.577325754Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Jan 20 00:42:28.580981 containerd[1464]: time="2026-01-20T00:42:28.580723042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:28.580981 containerd[1464]: time="2026-01-20T00:42:28.580939470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:42:28.581637 kubelet[2534]: E0120 00:42:28.581440 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:28.581637 kubelet[2534]: E0120 00:42:28.581529 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:28.582818 kubelet[2534]: E0120 00:42:28.581683 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:28.583619 containerd[1464]: time="2026-01-20T00:42:28.583279427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:28.652116 containerd[1464]: time="2026-01-20T00:42:28.652010842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:28.654993 containerd[1464]: time="2026-01-20T00:42:28.654867184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:28.655070 containerd[1464]: time="2026-01-20T00:42:28.654961643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:42:28.655492 kubelet[2534]: E0120 00:42:28.655248 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:28.655492 kubelet[2534]: E0120 00:42:28.655341 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:28.656135 kubelet[2534]: E0120 00:42:28.655526 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:28.656135 kubelet[2534]: E0120 00:42:28.655569 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:32.407388 containerd[1464]: time="2026-01-20T00:42:32.407251452Z" level=info msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.482 [WARNING][4940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"183d032b-800c-4fa4-8adf-a6b6125809b8", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b", Pod:"calico-apiserver-5fcdf9c988-cp8b5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c263026a81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.482 [INFO][4940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.482 [INFO][4940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" iface="eth0" netns="" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.482 [INFO][4940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.482 [INFO][4940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.538 [INFO][4949] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.539 [INFO][4949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.539 [INFO][4949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.551 [WARNING][4949] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.551 [INFO][4949] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.554 [INFO][4949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:32.563243 containerd[1464]: 2026-01-20 00:42:32.559 [INFO][4940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.564546 containerd[1464]: time="2026-01-20T00:42:32.563519319Z" level=info msg="TearDown network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" successfully" Jan 20 00:42:32.564546 containerd[1464]: time="2026-01-20T00:42:32.563545757Z" level=info msg="StopPodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" returns successfully" Jan 20 00:42:32.564546 containerd[1464]: time="2026-01-20T00:42:32.564522982Z" level=info msg="RemovePodSandbox for \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" Jan 20 00:42:32.567849 containerd[1464]: time="2026-01-20T00:42:32.567781959Z" level=info msg="Forcibly stopping sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\"" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.649 [WARNING][4970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"183d032b-800c-4fa4-8adf-a6b6125809b8", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cb263014f039e1987c958e5f54279f09aab1d3af0fac7fd7839f49d9ae38a70b", Pod:"calico-apiserver-5fcdf9c988-cp8b5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2c263026a81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.650 [INFO][4970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.650 [INFO][4970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" iface="eth0" netns="" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.650 [INFO][4970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.650 [INFO][4970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.694 [INFO][4978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.695 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.695 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.702 [WARNING][4978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.702 [INFO][4978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" HandleID="k8s-pod-network.14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--cp8b5-eth0" Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.705 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:32.710302 containerd[1464]: 2026-01-20 00:42:32.707 [INFO][4970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf" Jan 20 00:42:32.710302 containerd[1464]: time="2026-01-20T00:42:32.710265260Z" level=info msg="TearDown network for sandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" successfully" Jan 20 00:42:32.731234 containerd[1464]: time="2026-01-20T00:42:32.730973675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:32.731234 containerd[1464]: time="2026-01-20T00:42:32.731045835Z" level=info msg="RemovePodSandbox \"14889157febf8f344bf64472db91b9efbc21e59507c1a1faebeebbe667a6b2cf\" returns successfully" Jan 20 00:42:32.732075 containerd[1464]: time="2026-01-20T00:42:32.731945891Z" level=info msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.802 [WARNING][4996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7994-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28fdd63b-baae-4d6e-b08c-1195d95658e8", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135", Pod:"csi-node-driver-q7994", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif825032be80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.802 [INFO][4996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.802 [INFO][4996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" iface="eth0" netns="" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.802 [INFO][4996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.802 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.847 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.847 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.847 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.858 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.858 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.861 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:32.870159 containerd[1464]: 2026-01-20 00:42:32.865 [INFO][4996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:32.870159 containerd[1464]: time="2026-01-20T00:42:32.870005286Z" level=info msg="TearDown network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" successfully" Jan 20 00:42:32.870159 containerd[1464]: time="2026-01-20T00:42:32.870038135Z" level=info msg="StopPodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" returns successfully" Jan 20 00:42:32.871721 containerd[1464]: time="2026-01-20T00:42:32.871481679Z" level=info msg="RemovePodSandbox for \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" Jan 20 00:42:32.871721 containerd[1464]: time="2026-01-20T00:42:32.871546026Z" level=info msg="Forcibly stopping sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\"" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.947 [WARNING][5021] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q7994-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"28fdd63b-baae-4d6e-b08c-1195d95658e8", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63bdd303bc40ef540e7649822ce15da8dcf07ee545bb89ecd055747c078b4135", Pod:"csi-node-driver-q7994", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif825032be80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.948 [INFO][5021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.948 [INFO][5021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" iface="eth0" netns="" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.948 [INFO][5021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.948 [INFO][5021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.980 [INFO][5031] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.980 [INFO][5031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.980 [INFO][5031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.994 [WARNING][5031] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.994 [INFO][5031] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" HandleID="k8s-pod-network.129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Workload="localhost-k8s-csi--node--driver--q7994-eth0" Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:32.997 [INFO][5031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.005796 containerd[1464]: 2026-01-20 00:42:33.000 [INFO][5021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937" Jan 20 00:42:33.005796 containerd[1464]: time="2026-01-20T00:42:33.005576473Z" level=info msg="TearDown network for sandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" successfully" Jan 20 00:42:33.013485 containerd[1464]: time="2026-01-20T00:42:33.013206141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:33.013485 containerd[1464]: time="2026-01-20T00:42:33.013274666Z" level=info msg="RemovePodSandbox \"129f5196ffb2cd0d643aabc0ee01c7671592cfa6a0b72258f80e8854746c3937\" returns successfully" Jan 20 00:42:33.014179 containerd[1464]: time="2026-01-20T00:42:33.014124916Z" level=info msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.072 [WARNING][5049] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" WorkloadEndpoint="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.073 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.073 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" iface="eth0" netns="" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.073 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.073 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.161 [INFO][5057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.162 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.162 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.180 [WARNING][5057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.180 [INFO][5057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.184 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.190975 containerd[1464]: 2026-01-20 00:42:33.187 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.190975 containerd[1464]: time="2026-01-20T00:42:33.190912607Z" level=info msg="TearDown network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" successfully" Jan 20 00:42:33.190975 containerd[1464]: time="2026-01-20T00:42:33.190953691Z" level=info msg="StopPodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" returns successfully" Jan 20 00:42:33.192239 containerd[1464]: time="2026-01-20T00:42:33.191977244Z" level=info msg="RemovePodSandbox for \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" Jan 20 00:42:33.192239 containerd[1464]: time="2026-01-20T00:42:33.192013389Z" level=info msg="Forcibly stopping sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\"" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.254 [WARNING][5075] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" WorkloadEndpoint="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.255 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.255 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" iface="eth0" netns="" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.255 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.255 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.310 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.310 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.310 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.319 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.319 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" HandleID="k8s-pod-network.312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Workload="localhost-k8s-whisker--59d4dcb77c--t6w5v-eth0" Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.322 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.328507 containerd[1464]: 2026-01-20 00:42:33.325 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9" Jan 20 00:42:33.331167 containerd[1464]: time="2026-01-20T00:42:33.328972541Z" level=info msg="TearDown network for sandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" successfully" Jan 20 00:42:33.334529 containerd[1464]: time="2026-01-20T00:42:33.334495860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:33.334791 containerd[1464]: time="2026-01-20T00:42:33.334707064Z" level=info msg="RemovePodSandbox \"312f18b0f47f4983c4ea19164b3cf3c44555c119bf4df2915b7ef91c0fffd4e9\" returns successfully" Jan 20 00:42:33.335558 containerd[1464]: time="2026-01-20T00:42:33.335338359Z" level=info msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.408 [WARNING][5102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wssqv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a912c1f3-a959-479f-b8c4-402f78743287", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4", Pod:"coredns-66bc5c9577-wssqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3ead960850", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.409 [INFO][5102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.409 [INFO][5102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" iface="eth0" netns="" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.409 [INFO][5102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.409 [INFO][5102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.450 [INFO][5111] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.450 [INFO][5111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.451 [INFO][5111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.464 [WARNING][5111] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.464 [INFO][5111] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.469 [INFO][5111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.475790 containerd[1464]: 2026-01-20 00:42:33.472 [INFO][5102] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.478639 containerd[1464]: time="2026-01-20T00:42:33.475844908Z" level=info msg="TearDown network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" successfully" Jan 20 00:42:33.478639 containerd[1464]: time="2026-01-20T00:42:33.475879982Z" level=info msg="StopPodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" returns successfully" Jan 20 00:42:33.478639 containerd[1464]: time="2026-01-20T00:42:33.476656475Z" level=info msg="RemovePodSandbox for \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" Jan 20 00:42:33.478639 containerd[1464]: time="2026-01-20T00:42:33.476690436Z" level=info msg="Forcibly stopping sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\"" Jan 20 00:42:33.492044 containerd[1464]: time="2026-01-20T00:42:33.491983693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:33.575866 containerd[1464]: time="2026-01-20T00:42:33.575670701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:33.578438 containerd[1464]: time="2026-01-20T00:42:33.578275571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:33.580226 containerd[1464]: time="2026-01-20T00:42:33.578440180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:33.580545 kubelet[2534]: E0120 00:42:33.579574 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:33.580545 kubelet[2534]: E0120 00:42:33.579704 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:33.580545 kubelet[2534]: E0120 00:42:33.579807 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-cp8b5_calico-apiserver(183d032b-800c-4fa4-8adf-a6b6125809b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:33.580545 kubelet[2534]: E0120 00:42:33.579851 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.531 [WARNING][5129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wssqv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a912c1f3-a959-479f-b8c4-402f78743287", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f1bba7c436bcb9d60d1fde9812966ef486386c43e77405e5e81bf6f644855d4", Pod:"coredns-66bc5c9577-wssqv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3ead960850", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.531 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.531 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" iface="eth0" netns="" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.531 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.531 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.604 [INFO][5138] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.605 [INFO][5138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.605 [INFO][5138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.614 [WARNING][5138] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.614 [INFO][5138] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" HandleID="k8s-pod-network.705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Workload="localhost-k8s-coredns--66bc5c9577--wssqv-eth0" Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.617 [INFO][5138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.634806 containerd[1464]: 2026-01-20 00:42:33.626 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e" Jan 20 00:42:33.634806 containerd[1464]: time="2026-01-20T00:42:33.632692023Z" level=info msg="TearDown network for sandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" successfully" Jan 20 00:42:33.640034 containerd[1464]: time="2026-01-20T00:42:33.639947946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:33.640142 containerd[1464]: time="2026-01-20T00:42:33.640124457Z" level=info msg="RemovePodSandbox \"705cdb8a9c897c7d299e225e76eefc73eb3352231d5f5fc948aff1b0fb5ae48e\" returns successfully" Jan 20 00:42:33.640839 containerd[1464]: time="2026-01-20T00:42:33.640794170Z" level=info msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.696 [WARNING][5154] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0", GenerateName:"calico-kube-controllers-6b77768bf7-", Namespace:"calico-system", SelfLink:"", UID:"a7b5c492-f30c-4416-b600-12afd3fc29bc", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b77768bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23", Pod:"calico-kube-controllers-6b77768bf7-8nrk9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali707c6aeb6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.696 [INFO][5154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.696 [INFO][5154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" iface="eth0" netns="" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.696 [INFO][5154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.696 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.739 [INFO][5162] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.739 [INFO][5162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.739 [INFO][5162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.750 [WARNING][5162] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.750 [INFO][5162] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.754 [INFO][5162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.760137 containerd[1464]: 2026-01-20 00:42:33.757 [INFO][5154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.760945 containerd[1464]: time="2026-01-20T00:42:33.760780472Z" level=info msg="TearDown network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" successfully" Jan 20 00:42:33.761015 containerd[1464]: time="2026-01-20T00:42:33.760995812Z" level=info msg="StopPodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" returns successfully" Jan 20 00:42:33.762169 containerd[1464]: time="2026-01-20T00:42:33.762113643Z" level=info msg="RemovePodSandbox for \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" Jan 20 00:42:33.762322 containerd[1464]: time="2026-01-20T00:42:33.762178079Z" level=info msg="Forcibly stopping sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\"" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.830 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0", GenerateName:"calico-kube-controllers-6b77768bf7-", Namespace:"calico-system", SelfLink:"", UID:"a7b5c492-f30c-4416-b600-12afd3fc29bc", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b77768bf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f363f01c2c8717f786f4ef0709fa979749f4cf99caba73a3f01b36cd28388a23", Pod:"calico-kube-controllers-6b77768bf7-8nrk9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali707c6aeb6df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.832 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.832 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" iface="eth0" netns="" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.832 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.833 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.886 [INFO][5188] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.887 [INFO][5188] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.887 [INFO][5188] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.894 [WARNING][5188] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.894 [INFO][5188] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" HandleID="k8s-pod-network.9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Workload="localhost-k8s-calico--kube--controllers--6b77768bf7--8nrk9-eth0" Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.896 [INFO][5188] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:33.907520 containerd[1464]: 2026-01-20 00:42:33.901 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440" Jan 20 00:42:33.907520 containerd[1464]: time="2026-01-20T00:42:33.906058944Z" level=info msg="TearDown network for sandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" successfully" Jan 20 00:42:33.915481 containerd[1464]: time="2026-01-20T00:42:33.915308850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:33.915659 containerd[1464]: time="2026-01-20T00:42:33.915533617Z" level=info msg="RemovePodSandbox \"9014dac33eb273b19b46ccc1e1ac7bae16ee61d59bc50573d5454e59fbdce440\" returns successfully" Jan 20 00:42:33.916258 containerd[1464]: time="2026-01-20T00:42:33.916197841Z" level=info msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:33.971 [WARNING][5204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"94df424b-d767-451c-98d9-6b195890f32a", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69", Pod:"calico-apiserver-5fcdf9c988-97g72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ea0611d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:33.972 [INFO][5204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:33.972 [INFO][5204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" iface="eth0" netns="" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:33.972 [INFO][5204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:33.972 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.010 [INFO][5213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.010 [INFO][5213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.010 [INFO][5213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.017 [WARNING][5213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.018 [INFO][5213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.021 [INFO][5213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.032964 containerd[1464]: 2026-01-20 00:42:34.026 [INFO][5204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.033897 containerd[1464]: time="2026-01-20T00:42:34.033072183Z" level=info msg="TearDown network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" successfully" Jan 20 00:42:34.033897 containerd[1464]: time="2026-01-20T00:42:34.033096938Z" level=info msg="StopPodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" returns successfully" Jan 20 00:42:34.034213 containerd[1464]: time="2026-01-20T00:42:34.034080873Z" level=info msg="RemovePodSandbox for \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" Jan 20 00:42:34.034213 containerd[1464]: time="2026-01-20T00:42:34.034161160Z" level=info msg="Forcibly stopping sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\"" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.089 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0", GenerateName:"calico-apiserver-5fcdf9c988-", Namespace:"calico-apiserver", SelfLink:"", UID:"94df424b-d767-451c-98d9-6b195890f32a", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fcdf9c988", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"598324bdb25fe2d23ac054ed71c59aacc16640181651d6f80b7cab66f250ec69", Pod:"calico-apiserver-5fcdf9c988-97g72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali84ea0611d73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.089 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.089 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" iface="eth0" netns="" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.089 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.089 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.132 [INFO][5237] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.132 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.132 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.142 [WARNING][5237] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.142 [INFO][5237] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" HandleID="k8s-pod-network.d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Workload="localhost-k8s-calico--apiserver--5fcdf9c988--97g72-eth0" Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.146 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.155638 containerd[1464]: 2026-01-20 00:42:34.151 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca" Jan 20 00:42:34.155638 containerd[1464]: time="2026-01-20T00:42:34.155489633Z" level=info msg="TearDown network for sandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" successfully" Jan 20 00:42:34.165975 containerd[1464]: time="2026-01-20T00:42:34.164146398Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:34.165975 containerd[1464]: time="2026-01-20T00:42:34.164221245Z" level=info msg="RemovePodSandbox \"d8ca29653ceb0c45c9db3be724c7de8a8cf2445c9f2a84373ade33ed2e641eca\" returns successfully" Jan 20 00:42:34.165975 containerd[1464]: time="2026-01-20T00:42:34.165313909Z" level=info msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.214 [WARNING][5255] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c474g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601adc65-0461-4784-a3c3-b551c4a085b3", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613", Pod:"coredns-66bc5c9577-c474g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b0fc346547", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.215 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.215 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" iface="eth0" netns="" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.215 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.215 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.250 [INFO][5264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.251 [INFO][5264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.251 [INFO][5264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.261 [WARNING][5264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.261 [INFO][5264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.264 [INFO][5264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.270694 containerd[1464]: 2026-01-20 00:42:34.267 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.270694 containerd[1464]: time="2026-01-20T00:42:34.270654236Z" level=info msg="TearDown network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" successfully" Jan 20 00:42:34.270694 containerd[1464]: time="2026-01-20T00:42:34.270679942Z" level=info msg="StopPodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" returns successfully" Jan 20 00:42:34.273491 containerd[1464]: time="2026-01-20T00:42:34.271245873Z" level=info msg="RemovePodSandbox for \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" Jan 20 00:42:34.273491 containerd[1464]: time="2026-01-20T00:42:34.271270808Z" level=info msg="Forcibly stopping sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\"" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.327 [WARNING][5282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--c474g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"601adc65-0461-4784-a3c3-b551c4a085b3", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7495a94e1b84868fd0d44a0edcb05de541c21337a562e5c45eb4aba309aef613", Pod:"coredns-66bc5c9577-c474g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b0fc346547", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.328 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.328 [INFO][5282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" iface="eth0" netns="" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.328 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.328 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.372 [INFO][5291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.373 [INFO][5291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.373 [INFO][5291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.384 [WARNING][5291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.384 [INFO][5291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" HandleID="k8s-pod-network.3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Workload="localhost-k8s-coredns--66bc5c9577--c474g-eth0" Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.387 [INFO][5291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.393856 containerd[1464]: 2026-01-20 00:42:34.390 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12" Jan 20 00:42:34.394717 containerd[1464]: time="2026-01-20T00:42:34.393855622Z" level=info msg="TearDown network for sandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" successfully" Jan 20 00:42:34.399779 containerd[1464]: time="2026-01-20T00:42:34.399674492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:34.399779 containerd[1464]: time="2026-01-20T00:42:34.399726476Z" level=info msg="RemovePodSandbox \"3f816ff30dc2854b2f54b3a891cddd51035402faf79fe0497bde15fad7ef4f12\" returns successfully" Jan 20 00:42:34.400799 containerd[1464]: time="2026-01-20T00:42:34.400649166Z" level=info msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" Jan 20 00:42:34.437702 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:49218.service - OpenSSH per-connection server daemon (10.0.0.1:49218). 
Jan 20 00:42:34.500087 containerd[1464]: time="2026-01-20T00:42:34.500048233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.470 [WARNING][5310] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c8bcc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16f84c69-9c06-4195-a968-2c29cf809ca6", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7", Pod:"goldmane-7c778bb748-c8bcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33c83a9870a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.470 [INFO][5310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.470 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" iface="eth0" netns="" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.470 [INFO][5310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.470 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.512 [INFO][5321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.512 [INFO][5321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.513 [INFO][5321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.519 [WARNING][5321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.519 [INFO][5321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.522 [INFO][5321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.530837 containerd[1464]: 2026-01-20 00:42:34.526 [INFO][5310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.530837 containerd[1464]: time="2026-01-20T00:42:34.530726204Z" level=info msg="TearDown network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" successfully" Jan 20 00:42:34.530837 containerd[1464]: time="2026-01-20T00:42:34.530760837Z" level=info msg="StopPodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" returns successfully" Jan 20 00:42:34.532096 containerd[1464]: time="2026-01-20T00:42:34.531964758Z" level=info msg="RemovePodSandbox for \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" Jan 20 00:42:34.532096 containerd[1464]: time="2026-01-20T00:42:34.531998469Z" level=info msg="Forcibly stopping sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\"" Jan 20 00:42:34.539880 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 49218 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:34.542333 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:34.561675 systemd-logind[1442]: New session 8 of user core. Jan 20 00:42:34.568064 systemd[1]: Started session-8.scope - Session 8 of User core. 
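Editor's note: the PullImage requests in these entries (here ghcr.io/flatcar/calico/goldmane:v3.30.4, earlier ghcr.io/flatcar/calico/apiserver:v3.30.4) all end in NotFound because ghcr.io has no manifest for that tag, and the failure surfaces moments later as an ErrImagePull. A minimal sketch for reproducing the resolution failure directly against containerd, outside kubelet, is below; it assumes containerd's v1 Go client API, the default socket at /run/containerd/containerd.sock, and the k8s.io namespace kubelet pulls into, so treat those paths as assumptions rather than facts from this log.

    package main

    import (
        "context"
        "fmt"
        "os"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumed socket path; this is the containerd default but may differ per host.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            fmt.Fprintln(os.Stderr, "connect:", err)
            os.Exit(1)
        }
        defer client.Close()

        // kubelet pulls into the "k8s.io" namespace, so resolve the reference there too.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // the tag the log shows as missing
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            // Expected to mirror the journal: "failed to resolve reference ...: not found".
            fmt.Fprintln(os.Stderr, "pull:", err)
            os.Exit(1)
        }
        fmt.Println("pulled", img.Name())
    }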
Jan 20 00:42:34.588150 containerd[1464]: time="2026-01-20T00:42:34.588047846Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:34.595280 containerd[1464]: time="2026-01-20T00:42:34.595096705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:34.596833 containerd[1464]: time="2026-01-20T00:42:34.596619356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:42:34.598291 kubelet[2534]: E0120 00:42:34.597336 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:34.598291 kubelet[2534]: E0120 00:42:34.597456 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:42:34.598291 kubelet[2534]: E0120 00:42:34.597597 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c8bcc_calico-system(16f84c69-9c06-4195-a968-2c29cf809ca6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:34.598291 kubelet[2534]: E0120 00:42:34.597639 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.664 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--c8bcc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"16f84c69-9c06-4195-a968-2c29cf809ca6", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 0, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0405d4b6e17f16125a82d48474b361568400ec075e55250ac73b0c21bc6bfb7", Pod:"goldmane-7c778bb748-c8bcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali33c83a9870a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.666 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.666 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" iface="eth0" netns="" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.666 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.666 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.750 [INFO][5356] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.750 [INFO][5356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.750 [INFO][5356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.773 [WARNING][5356] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.774 [INFO][5356] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" HandleID="k8s-pod-network.aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Workload="localhost-k8s-goldmane--7c778bb748--c8bcc-eth0" Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.777 [INFO][5356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 00:42:34.793769 containerd[1464]: 2026-01-20 00:42:34.786 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e" Jan 20 00:42:34.793769 containerd[1464]: time="2026-01-20T00:42:34.794645227Z" level=info msg="TearDown network for sandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" successfully" Jan 20 00:42:34.815313 containerd[1464]: time="2026-01-20T00:42:34.814620873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 00:42:34.815313 containerd[1464]: time="2026-01-20T00:42:34.814701170Z" level=info msg="RemovePodSandbox \"aba6152b7de14f00c800ab2fa757282559c1bd1240db2244e8565bfe29b2238e\" returns successfully" Jan 20 00:42:34.920152 sshd[5317]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:34.925638 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:49218.service: Deactivated successfully. Jan 20 00:42:34.930155 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 00:42:34.931797 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 20 00:42:34.934079 systemd-logind[1442]: Removed session 8. 
Jan 20 00:42:35.492557 containerd[1464]: time="2026-01-20T00:42:35.491994622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:35.563631 containerd[1464]: time="2026-01-20T00:42:35.563539906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:35.565631 containerd[1464]: time="2026-01-20T00:42:35.565361801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:35.565955 containerd[1464]: time="2026-01-20T00:42:35.565613941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:35.566150 kubelet[2534]: E0120 00:42:35.565899 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:35.566150 kubelet[2534]: E0120 00:42:35.565960 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:35.566313 kubelet[2534]: E0120 00:42:35.566196 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-97g72_calico-apiserver(94df424b-d767-451c-98d9-6b195890f32a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:35.566313 kubelet[2534]: E0120 00:42:35.566244 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:35.567753 containerd[1464]: time="2026-01-20T00:42:35.567662227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:42:35.630222 containerd[1464]: time="2026-01-20T00:42:35.630144301Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:35.632122 containerd[1464]: time="2026-01-20T00:42:35.632040750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 
20 00:42:35.632262 containerd[1464]: time="2026-01-20T00:42:35.632163753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:42:35.632687 kubelet[2534]: E0120 00:42:35.632620 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:35.633014 kubelet[2534]: E0120 00:42:35.632705 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:42:35.633014 kubelet[2534]: E0120 00:42:35.632819 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b77768bf7-8nrk9_calico-system(a7b5c492-f30c-4416-b600-12afd3fc29bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:35.633014 kubelet[2534]: E0120 00:42:35.632869 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:37.497591 containerd[1464]: time="2026-01-20T00:42:37.497323942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:42:37.573230 containerd[1464]: time="2026-01-20T00:42:37.573056563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:37.574826 containerd[1464]: time="2026-01-20T00:42:37.574692693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:42:37.574937 containerd[1464]: time="2026-01-20T00:42:37.574834112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:42:37.575342 kubelet[2534]: E0120 00:42:37.575198 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:37.575342 kubelet[2534]: E0120 00:42:37.575317 2534 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:42:37.575991 kubelet[2534]: E0120 00:42:37.575584 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:37.577310 containerd[1464]: time="2026-01-20T00:42:37.577267490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:42:37.641892 containerd[1464]: time="2026-01-20T00:42:37.641615017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:37.651083 containerd[1464]: time="2026-01-20T00:42:37.650773555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:42:37.651083 containerd[1464]: time="2026-01-20T00:42:37.650955218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:42:37.651610 kubelet[2534]: E0120 00:42:37.651262 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:37.651610 kubelet[2534]: E0120 00:42:37.651526 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:42:37.651795 kubelet[2534]: E0120 00:42:37.651636 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:37.651795 kubelet[2534]: E0120 00:42:37.651678 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:39.943566 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:49232.service - OpenSSH per-connection server daemon (10.0.0.1:49232). Jan 20 00:42:39.995862 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 49232 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:39.998305 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:40.008012 systemd-logind[1442]: New session 9 of user core. Jan 20 00:42:40.012742 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 00:42:40.219606 sshd[5384]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:40.225978 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:49232.service: Deactivated successfully. Jan 20 00:42:40.229186 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 00:42:40.232809 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 20 00:42:40.235139 systemd-logind[1442]: Removed session 9. Jan 20 00:42:41.505242 kubelet[2534]: E0120 00:42:41.503976 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:45.239480 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:41674.service - OpenSSH per-connection server daemon (10.0.0.1:41674). Jan 20 00:42:45.300660 sshd[5402]: Accepted publickey for core from 10.0.0.1 port 41674 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:45.303647 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:45.310995 systemd-logind[1442]: New session 10 of user core. Jan 20 00:42:45.320745 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 00:42:45.568042 sshd[5402]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:45.580888 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:41674.service: Deactivated successfully. Jan 20 00:42:45.585086 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 00:42:45.588892 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. 
Jan 20 00:42:45.596615 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:41682.service - OpenSSH per-connection server daemon (10.0.0.1:41682). Jan 20 00:42:45.600005 systemd-logind[1442]: Removed session 10. Jan 20 00:42:45.709901 sshd[5418]: Accepted publickey for core from 10.0.0.1 port 41682 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:45.713703 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:45.724897 systemd-logind[1442]: New session 11 of user core. Jan 20 00:42:45.764707 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 00:42:48.559111 kubelet[2534]: E0120 00:42:48.558824 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:42:48.563891 kubelet[2534]: E0120 00:42:48.563594 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:42:48.565450 kubelet[2534]: E0120 00:42:48.563879 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:48.861775 sshd[5418]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:48.897750 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:41682.service: Deactivated successfully. Jan 20 00:42:48.919957 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 00:42:48.920860 systemd[1]: session-11.scope: Consumed 1.584s CPU time. Jan 20 00:42:48.931999 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Jan 20 00:42:48.953702 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:41684.service - OpenSSH per-connection server daemon (10.0.0.1:41684). Jan 20 00:42:48.958655 systemd-logind[1442]: Removed session 11. Jan 20 00:42:49.015190 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 41684 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:49.017729 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:49.032448 systemd-logind[1442]: New session 12 of user core. 
Jan 20 00:42:49.062201 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 00:42:49.321117 sshd[5433]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:49.328123 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:41684.service: Deactivated successfully. Jan 20 00:42:49.332583 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 00:42:49.335214 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 20 00:42:49.341243 systemd-logind[1442]: Removed session 12. Jan 20 00:42:49.491648 kubelet[2534]: E0120 00:42:49.491540 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:42:52.497093 kubelet[2534]: E0120 00:42:52.496864 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:42:53.095147 systemd[1]: run-containerd-runc-k8s.io-8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d-runc.MRvnma.mount: Deactivated successfully. Jan 20 00:42:54.344816 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:58942.service - OpenSSH per-connection server daemon (10.0.0.1:58942). Jan 20 00:42:54.408911 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 58942 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:54.411568 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:54.425590 systemd-logind[1442]: New session 13 of user core. Jan 20 00:42:54.430743 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 00:42:54.494540 kubelet[2534]: E0120 00:42:54.491221 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:42:54.495304 containerd[1464]: time="2026-01-20T00:42:54.491461904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 00:42:54.567523 containerd[1464]: time="2026-01-20T00:42:54.567184081Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:54.574670 containerd[1464]: time="2026-01-20T00:42:54.569973856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 00:42:54.574670 containerd[1464]: time="2026-01-20T00:42:54.570121413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 00:42:54.574670 containerd[1464]: time="2026-01-20T00:42:54.574120481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 00:42:54.574967 kubelet[2534]: E0120 00:42:54.571481 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:54.574967 kubelet[2534]: E0120 00:42:54.571534 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 00:42:54.574967 kubelet[2534]: E0120 00:42:54.572728 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:54.647501 containerd[1464]: time="2026-01-20T00:42:54.647166397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:54.651647 containerd[1464]: time="2026-01-20T00:42:54.651594772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 00:42:54.651993 containerd[1464]: time="2026-01-20T00:42:54.651873668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 00:42:54.653483 kubelet[2534]: E0120 00:42:54.653190 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:54.654653 kubelet[2534]: E0120 00:42:54.653453 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 00:42:54.656011 kubelet[2534]: E0120 00:42:54.655967 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dcfd46877-pcvss_calico-system(8c80f384-368b-4ca9-94db-bbaff86a3f79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:54.656089 kubelet[2534]: E0120 00:42:54.656033 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:42:54.682621 sshd[5474]: pam_unix(sshd:session): session closed for user core Jan 20 00:42:54.690767 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:58942.service: Deactivated successfully. Jan 20 00:42:54.693930 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 00:42:54.698072 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Jan 20 00:42:54.700087 systemd-logind[1442]: Removed session 13. 
Jan 20 00:42:59.493898 containerd[1464]: time="2026-01-20T00:42:59.493850470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:42:59.568017 containerd[1464]: time="2026-01-20T00:42:59.567880693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:42:59.570927 containerd[1464]: time="2026-01-20T00:42:59.570483311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:42:59.570927 containerd[1464]: time="2026-01-20T00:42:59.570569002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:42:59.571172 kubelet[2534]: E0120 00:42:59.571115 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:59.572248 kubelet[2534]: E0120 00:42:59.571689 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:42:59.572248 kubelet[2534]: E0120 00:42:59.571794 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-cp8b5_calico-apiserver(183d032b-800c-4fa4-8adf-a6b6125809b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:42:59.572248 kubelet[2534]: E0120 00:42:59.571827 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:42:59.710912 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:58956.service - OpenSSH per-connection server daemon (10.0.0.1:58956). Jan 20 00:42:59.789637 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 58956 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:42:59.791125 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:42:59.799339 systemd-logind[1442]: New session 14 of user core. Jan 20 00:42:59.807276 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 00:43:00.008152 sshd[5494]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:00.014117 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:58956.service: Deactivated successfully. Jan 20 00:43:00.016913 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 00:43:00.018210 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Jan 20 00:43:00.020047 systemd-logind[1442]: Removed session 14. Jan 20 00:43:00.496969 containerd[1464]: time="2026-01-20T00:43:00.496899462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 00:43:00.561772 containerd[1464]: time="2026-01-20T00:43:00.561699836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:43:00.563911 containerd[1464]: time="2026-01-20T00:43:00.563705605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 00:43:00.563911 containerd[1464]: time="2026-01-20T00:43:00.563818831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 00:43:00.564489 kubelet[2534]: E0120 00:43:00.564225 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:43:00.564489 kubelet[2534]: E0120 00:43:00.564342 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 00:43:00.564640 kubelet[2534]: E0120 00:43:00.564615 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c8bcc_calico-system(16f84c69-9c06-4195-a968-2c29cf809ca6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:00.564712 kubelet[2534]: E0120 00:43:00.564663 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:43:02.493444 kubelet[2534]: E0120 00:43:02.490786 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:43:02.494360 containerd[1464]: time="2026-01-20T00:43:02.494273644Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 00:43:02.558154 containerd[1464]: time="2026-01-20T00:43:02.558005282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:43:02.560329 containerd[1464]: time="2026-01-20T00:43:02.560176522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 00:43:02.560329 containerd[1464]: time="2026-01-20T00:43:02.560279006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 00:43:02.560742 kubelet[2534]: E0120 00:43:02.560665 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:02.560803 kubelet[2534]: E0120 00:43:02.560747 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 00:43:02.561009 kubelet[2534]: E0120 00:43:02.560884 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5fcdf9c988-97g72_calico-apiserver(94df424b-d767-451c-98d9-6b195890f32a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:02.561009 kubelet[2534]: E0120 00:43:02.560923 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:43:03.492696 containerd[1464]: time="2026-01-20T00:43:03.492292213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 00:43:03.561085 containerd[1464]: time="2026-01-20T00:43:03.560975724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:43:03.564274 containerd[1464]: time="2026-01-20T00:43:03.564089553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 00:43:03.564274 containerd[1464]: time="2026-01-20T00:43:03.564198631Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 00:43:03.564521 kubelet[2534]: E0120 00:43:03.564320 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:43:03.564521 kubelet[2534]: E0120 00:43:03.564476 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 00:43:03.565028 kubelet[2534]: E0120 00:43:03.564694 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b77768bf7-8nrk9_calico-system(a7b5c492-f30c-4416-b600-12afd3fc29bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:03.565028 kubelet[2534]: E0120 00:43:03.564742 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:43:04.492792 containerd[1464]: time="2026-01-20T00:43:04.492173265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 00:43:04.554494 containerd[1464]: time="2026-01-20T00:43:04.554151704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:43:04.556494 containerd[1464]: time="2026-01-20T00:43:04.556318700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 00:43:04.556688 containerd[1464]: time="2026-01-20T00:43:04.556533288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 00:43:04.556965 kubelet[2534]: E0120 00:43:04.556882 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:43:04.557041 kubelet[2534]: E0120 00:43:04.556969 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 00:43:04.557541 kubelet[2534]: E0120 00:43:04.557079 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:04.559956 containerd[1464]: time="2026-01-20T00:43:04.559879942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 00:43:04.647347 containerd[1464]: time="2026-01-20T00:43:04.646706636Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 00:43:04.650872 containerd[1464]: time="2026-01-20T00:43:04.650230455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 00:43:04.650872 containerd[1464]: time="2026-01-20T00:43:04.650324356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 00:43:04.652140 kubelet[2534]: E0120 00:43:04.652054 2534 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:43:04.652760 kubelet[2534]: E0120 00:43:04.652156 2534 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 00:43:04.652760 kubelet[2534]: E0120 00:43:04.652269 2534 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-q7994_calico-system(28fdd63b-baae-4d6e-b08c-1195d95658e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 00:43:04.652760 kubelet[2534]: E0120 00:43:04.652338 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:43:05.024289 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:36886.service - OpenSSH per-connection server daemon (10.0.0.1:36886). Jan 20 00:43:05.081956 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 36886 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:05.084031 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:05.089726 systemd-logind[1442]: New session 15 of user core. Jan 20 00:43:05.098649 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 00:43:05.257444 sshd[5516]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:05.263862 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:36886.service: Deactivated successfully. Jan 20 00:43:05.266007 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 00:43:05.272213 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Jan 20 00:43:05.273761 systemd-logind[1442]: Removed session 15. Jan 20 00:43:05.489165 kubelet[2534]: E0120 00:43:05.489089 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:43:07.491490 kubelet[2534]: E0120 00:43:07.490898 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:43:10.276673 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:36900.service - OpenSSH per-connection server daemon (10.0.0.1:36900). Jan 20 00:43:10.314141 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 36900 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:10.315748 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:10.323309 systemd-logind[1442]: New session 16 of user core. Jan 20 00:43:10.332801 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 00:43:10.492483 kubelet[2534]: E0120 00:43:10.491244 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:43:10.495766 sshd[5532]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:10.507920 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:36900.service: Deactivated successfully. Jan 20 00:43:10.511046 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 00:43:10.513361 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Jan 20 00:43:10.525875 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:36904.service - OpenSSH per-connection server daemon (10.0.0.1:36904). Jan 20 00:43:10.528089 systemd-logind[1442]: Removed session 16. Jan 20 00:43:10.581034 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 36904 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:10.583128 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:10.590547 systemd-logind[1442]: New session 17 of user core. Jan 20 00:43:10.599631 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 00:43:11.008594 sshd[5548]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:11.025140 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:36904.service: Deactivated successfully. Jan 20 00:43:11.030911 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 00:43:11.034623 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Jan 20 00:43:11.048899 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:36912.service - OpenSSH per-connection server daemon (10.0.0.1:36912). Jan 20 00:43:11.056078 systemd-logind[1442]: Removed session 17. Jan 20 00:43:11.123727 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 36912 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:11.126037 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:11.132275 systemd-logind[1442]: New session 18 of user core. Jan 20 00:43:11.145642 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 00:43:11.491411 kubelet[2534]: E0120 00:43:11.491312 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:43:11.720600 sshd[5561]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:11.729794 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:36912.service: Deactivated successfully. Jan 20 00:43:11.732574 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 00:43:11.735984 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Jan 20 00:43:11.745747 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:36922.service - OpenSSH per-connection server daemon (10.0.0.1:36922). Jan 20 00:43:11.748593 systemd-logind[1442]: Removed session 18. 
Jan 20 00:43:11.816943 sshd[5579]: Accepted publickey for core from 10.0.0.1 port 36922 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:11.821120 sshd[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:11.828124 systemd-logind[1442]: New session 19 of user core. Jan 20 00:43:11.848573 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 00:43:12.135210 sshd[5579]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:12.146574 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:36922.service: Deactivated successfully. Jan 20 00:43:12.151843 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 00:43:12.154425 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Jan 20 00:43:12.165724 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:36924.service - OpenSSH per-connection server daemon (10.0.0.1:36924). Jan 20 00:43:12.169790 systemd-logind[1442]: Removed session 19. Jan 20 00:43:12.213223 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 36924 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:12.216108 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:12.227499 systemd-logind[1442]: New session 20 of user core. Jan 20 00:43:12.233608 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 00:43:12.389186 sshd[5591]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:12.393273 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:36924.service: Deactivated successfully. Jan 20 00:43:12.396426 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 00:43:12.400351 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit. Jan 20 00:43:12.402589 systemd-logind[1442]: Removed session 20. Jan 20 00:43:13.490845 kubelet[2534]: E0120 00:43:13.490798 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:43:15.491257 kubelet[2534]: E0120 00:43:15.490631 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-cp8b5" podUID="183d032b-800c-4fa4-8adf-a6b6125809b8" Jan 20 00:43:17.406586 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:36294.service - OpenSSH per-connection server daemon (10.0.0.1:36294). 
Jan 20 00:43:17.448818 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 36294 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:17.450596 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:17.455691 systemd-logind[1442]: New session 21 of user core. Jan 20 00:43:17.461558 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 00:43:17.592113 sshd[5609]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:17.596758 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:36294.service: Deactivated successfully. Jan 20 00:43:17.598817 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 00:43:17.599689 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit. Jan 20 00:43:17.601284 systemd-logind[1442]: Removed session 21. Jan 20 00:43:18.490503 kubelet[2534]: E0120 00:43:18.489891 2534 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 00:43:18.497801 kubelet[2534]: E0120 00:43:18.497759 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b77768bf7-8nrk9" podUID="a7b5c492-f30c-4416-b600-12afd3fc29bc" Jan 20 00:43:18.498024 kubelet[2534]: E0120 00:43:18.497955 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8" Jan 20 00:43:19.490672 kubelet[2534]: E0120 00:43:19.490578 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dcfd46877-pcvss" podUID="8c80f384-368b-4ca9-94db-bbaff86a3f79" Jan 20 00:43:22.603963 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:60660.service - OpenSSH per-connection server daemon (10.0.0.1:60660). Jan 20 00:43:22.644563 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 60660 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:22.646027 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:22.651242 systemd-logind[1442]: New session 22 of user core. Jan 20 00:43:22.659589 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 00:43:22.799790 sshd[5625]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:22.803237 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:60660.service: Deactivated successfully. Jan 20 00:43:22.806090 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 00:43:22.808845 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit. Jan 20 00:43:22.810622 systemd-logind[1442]: Removed session 22. Jan 20 00:43:23.058785 systemd[1]: run-containerd-runc-k8s.io-8fb9fb9c455d2ae6d763ddba4785dd1a74837695de16924ab70b4cf82e734a1d-runc.vgNe9Z.mount: Deactivated successfully. Jan 20 00:43:25.490261 kubelet[2534]: E0120 00:43:25.490157 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fcdf9c988-97g72" podUID="94df424b-d767-451c-98d9-6b195890f32a" Jan 20 00:43:26.491265 kubelet[2534]: E0120 00:43:26.490865 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c8bcc" podUID="16f84c69-9c06-4195-a968-2c29cf809ca6" Jan 20 00:43:27.814285 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:60664.service - OpenSSH per-connection server daemon (10.0.0.1:60664). Jan 20 00:43:27.876568 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 60664 ssh2: RSA SHA256:QDu6mA/7nBrCWQCf0iTeSucNTVZb4RccwmaEJSjwzPc Jan 20 00:43:27.879282 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 00:43:27.887471 systemd-logind[1442]: New session 23 of user core. Jan 20 00:43:27.895659 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 00:43:28.077077 sshd[5662]: pam_unix(sshd:session): session closed for user core Jan 20 00:43:28.083214 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:60664.service: Deactivated successfully. 
Jan 20 00:43:28.086209 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 00:43:28.087639 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit. Jan 20 00:43:28.089518 systemd-logind[1442]: Removed session 23. Jan 20 00:43:29.490355 kubelet[2534]: E0120 00:43:29.490279 2534 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q7994" podUID="28fdd63b-baae-4d6e-b08c-1195d95658e8"