Jan 28 00:58:56.554812 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026 Jan 28 00:58:56.554840 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:56.554856 kernel: BIOS-provided physical RAM map: Jan 28 00:58:56.554865 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 00:58:56.554874 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 00:58:56.554883 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 00:58:56.554893 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 28 00:58:56.554901 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 28 00:58:56.554910 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 00:58:56.554922 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 00:58:56.554932 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 00:58:56.554940 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 00:58:56.554949 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 28 00:58:56.554959 kernel: NX (Execute Disable) protection: active Jan 28 00:58:56.554968 kernel: APIC: Static calls initialized Jan 28 00:58:56.554980 kernel: SMBIOS 2.8 present. 
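The BIOS-e820 map above is the firmware's statement of which physical address ranges are usable RAM and which are reserved. As a cross-check against the "Memory: ... 2571752K" totals reported further down, a minimal Python sketch can parse those lines from a saved copy of this log (the boot.log path is hypothetical) and sum the usable ranges:

    import re

    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_bytes(log_text: str) -> int:
        """Sum the sizes of all e820 ranges reported as 'usable'."""
        total = 0
        for start, end, kind in E820_RE.findall(log_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1   # ranges are inclusive
        return total

    log = open("boot.log").read()   # hypothetical path to a saved copy of this log
    print(f"{usable_bytes(log) / 2**20:.1f} MiB usable")

For the two usable ranges above this comes to roughly 2511 MiB, in line with the 2571752K total the "Memory:" line reports once the kernel has set aside its own reservations.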
Jan 28 00:58:56.554990 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 28 00:58:56.554998 kernel: Hypervisor detected: KVM Jan 28 00:58:56.555007 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 00:58:56.555015 kernel: kvm-clock: using sched offset of 7417117143 cycles Jan 28 00:58:56.555025 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:58:56.555034 kernel: tsc: Detected 2445.424 MHz processor Jan 28 00:58:56.555043 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 00:58:56.555052 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 00:58:56.555065 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 28 00:58:56.555075 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 00:58:56.555085 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 00:58:56.555095 kernel: Using GB pages for direct mapping Jan 28 00:58:56.555105 kernel: ACPI: Early table checksum verification disabled Jan 28 00:58:56.555115 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 28 00:58:56.555124 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555134 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555144 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555221 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 28 00:58:56.555232 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555242 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555251 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555259 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:56.555268 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jan 28 00:58:56.555278 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jan 28 00:58:56.555294 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 28 00:58:56.555310 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jan 28 00:58:56.555322 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jan 28 00:58:56.555332 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jan 28 00:58:56.555341 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jan 28 00:58:56.555431 kernel: No NUMA configuration found Jan 28 00:58:56.555441 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 28 00:58:56.555455 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 28 00:58:56.555465 kernel: Zone ranges: Jan 28 00:58:56.555475 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 00:58:56.555485 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 28 00:58:56.555494 kernel: Normal empty Jan 28 00:58:56.555504 kernel: Movable zone start for each node Jan 28 00:58:56.555515 kernel: Early memory node ranges Jan 28 00:58:56.555525 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 00:58:56.555535 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 28 00:58:56.555546 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 28 00:58:56.555559 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 00:58:56.555569 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 00:58:56.555579 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 28 00:58:56.555590 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 00:58:56.555600 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 00:58:56.555610 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 00:58:56.555620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 00:58:56.555630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 00:58:56.555640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 00:58:56.555654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 00:58:56.555666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 00:58:56.555676 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 00:58:56.555686 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 28 00:58:56.555695 kernel: TSC deadline timer available Jan 28 00:58:56.555704 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 28 00:58:56.555713 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 00:58:56.555722 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 28 00:58:56.555731 kernel: kvm-guest: setup PV sched yield Jan 28 00:58:56.555744 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 00:58:56.555753 kernel: Booting paravirtualized kernel on KVM Jan 28 00:58:56.555763 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 00:58:56.555772 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 28 00:58:56.555782 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 28 00:58:56.555791 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 28 00:58:56.555800 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 28 00:58:56.555810 kernel: kvm-guest: PV spinlocks enabled Jan 28 00:58:56.555819 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 00:58:56.555832 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:56.555842 kernel: random: crng init done Jan 28 00:58:56.555851 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:58:56.555860 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:58:56.555870 kernel: Fallback order for Node 0: 0 Jan 28 00:58:56.555879 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 28 00:58:56.555889 kernel: Policy zone: DMA32 Jan 28 00:58:56.555898 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:58:56.555908 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 136884K reserved, 0K cma-reserved) Jan 28 00:58:56.555921 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 28 00:58:56.555930 kernel: ftrace: allocating 37989 entries in 149 pages Jan 28 00:58:56.555940 kernel: ftrace: allocated 149 pages with 4 groups Jan 28 00:58:56.555949 kernel: Dynamic Preempt: voluntary Jan 28 00:58:56.555959 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:58:56.555969 kernel: rcu: RCU event tracing is enabled. Jan 28 00:58:56.555979 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 28 00:58:56.555988 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:58:56.555998 kernel: Rude variant of Tasks RCU enabled. Jan 28 00:58:56.556011 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:58:56.556020 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:58:56.556029 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 28 00:58:56.556038 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 28 00:58:56.556048 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:58:56.556058 kernel: Console: colour VGA+ 80x25 Jan 28 00:58:56.556068 kernel: printk: console [ttyS0] enabled Jan 28 00:58:56.556078 kernel: ACPI: Core revision 20230628 Jan 28 00:58:56.556087 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 28 00:58:56.556100 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 00:58:56.556110 kernel: x2apic enabled Jan 28 00:58:56.556119 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 00:58:56.556129 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 28 00:58:56.556138 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 28 00:58:56.556147 kernel: kvm-guest: setup PV IPIs Jan 28 00:58:56.556219 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 28 00:58:56.556246 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 28 00:58:56.556256 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 28 00:58:56.556266 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 00:58:56.556277 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 28 00:58:56.556290 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 28 00:58:56.556305 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 00:58:56.556318 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 00:58:56.556329 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 00:58:56.556339 kernel: Speculative Store Bypass: Vulnerable Jan 28 00:58:56.556479 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 28 00:58:56.556491 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Jan 28 00:58:56.556501 kernel: active return thunk: srso_alias_return_thunk Jan 28 00:58:56.556512 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 28 00:58:56.556522 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 28 00:58:56.556532 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 28 00:58:56.556542 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 00:58:56.556552 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 00:58:56.556562 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 00:58:56.556577 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 00:58:56.556588 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 28 00:58:56.556640 kernel: Freeing SMP alternatives memory: 32K Jan 28 00:58:56.556683 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:58:56.556694 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 00:58:56.556734 kernel: landlock: Up and running. Jan 28 00:58:56.556774 kernel: SELinux: Initializing. Jan 28 00:58:56.556784 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:58:56.556826 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:58:56.556873 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 28 00:58:56.556918 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:58:56.556980 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:58:56.556992 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 28 00:58:56.557003 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 28 00:58:56.557012 kernel: signal: max sigframe size: 1776 Jan 28 00:58:56.557022 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:58:56.557032 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:58:56.557046 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 00:58:56.557056 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:58:56.557066 kernel: smpboot: x86: Booting SMP configuration: Jan 28 00:58:56.557076 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 28 00:58:56.557085 kernel: smp: Brought up 1 node, 4 CPUs Jan 28 00:58:56.557095 kernel: smpboot: Max logical packages: 1 Jan 28 00:58:56.557105 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 28 00:58:56.557114 kernel: devtmpfs: initialized Jan 28 00:58:56.557124 kernel: x86/mm: Memory block size: 128MB Jan 28 00:58:56.557135 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:58:56.557202 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 28 00:58:56.557218 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:58:56.557231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:58:56.557242 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:58:56.557253 kernel: audit: type=2000 audit(1769561933.055:1): state=initialized audit_enabled=0 res=1 Jan 28 00:58:56.557263 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:58:56.557273 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 00:58:56.557284 kernel: cpuidle: using governor menu Jan 28 00:58:56.557293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:58:56.557307 kernel: dca service started, version 1.12.1 Jan 28 00:58:56.557317 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 28 00:58:56.557330 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 00:58:56.557342 kernel: PCI: Using configuration type 1 for base access Jan 28 00:58:56.557433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 28 00:58:56.557443 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:58:56.557453 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:58:56.557463 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:58:56.557478 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:58:56.557489 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:58:56.557500 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:58:56.557511 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:58:56.557522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:58:56.557532 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 28 00:58:56.557543 kernel: ACPI: Interpreter enabled Jan 28 00:58:56.557554 kernel: ACPI: PM: (supports S0 S3 S5) Jan 28 00:58:56.557565 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 00:58:56.557576 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 00:58:56.557589 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 00:58:56.557599 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 00:58:56.557609 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 00:58:56.557864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 00:58:56.558096 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 28 00:58:56.558508 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 28 00:58:56.558525 kernel: PCI host bridge to bus 0000:00 Jan 28 00:58:56.558760 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 00:58:56.558921 kernel: pci_bus 0000:00: root bus resource 
[io 0x0d00-0xffff window] Jan 28 00:58:56.559068 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 00:58:56.559298 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 28 00:58:56.559538 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 00:58:56.559685 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 28 00:58:56.559838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 00:58:56.560017 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 28 00:58:56.560279 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 28 00:58:56.560545 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 28 00:58:56.560706 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 28 00:58:56.560866 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 28 00:58:56.561034 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 00:58:56.561286 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 10742 usecs Jan 28 00:58:56.561572 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 28 00:58:56.561751 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 28 00:58:56.561911 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 28 00:58:56.562074 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 28 00:58:56.562533 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 28 00:58:56.562900 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 28 00:58:56.563070 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 28 00:58:56.563304 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 28 00:58:56.563584 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 28 00:58:56.563754 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 28 00:58:56.563916 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 28 00:58:56.564071 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 28 00:58:56.564319 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 28 00:58:56.564590 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 28 00:58:56.564752 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 00:58:56.564915 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 28 00:58:56.565071 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 28 00:58:56.565543 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 28 00:58:56.565726 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 28 00:58:56.565890 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 28 00:58:56.565905 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 00:58:56.565915 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 00:58:56.565926 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 00:58:56.565936 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 00:58:56.565946 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 00:58:56.565956 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 00:58:56.565966 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 00:58:56.565976 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 
11 Jan 28 00:58:56.565990 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 00:58:56.566001 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 00:58:56.566011 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 00:58:56.566021 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 00:58:56.566031 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 28 00:58:56.566041 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 00:58:56.566050 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 00:58:56.566060 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 00:58:56.566070 kernel: iommu: Default domain type: Translated Jan 28 00:58:56.566083 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 00:58:56.566094 kernel: PCI: Using ACPI for IRQ routing Jan 28 00:58:56.566104 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 00:58:56.566114 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 00:58:56.566125 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 28 00:58:56.566607 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 00:58:56.566764 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 28 00:58:56.566918 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 00:58:56.566938 kernel: vgaarb: loaded Jan 28 00:58:56.566949 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 28 00:58:56.566959 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 28 00:58:56.566969 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 00:58:56.566979 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:58:56.566990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:58:56.567000 kernel: pnp: PnP ACPI init Jan 28 00:58:56.567297 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 00:58:56.567316 kernel: pnp: PnP ACPI: found 6 devices Jan 28 00:58:56.567334 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 00:58:56.567464 kernel: NET: Registered PF_INET protocol family Jan 28 00:58:56.567477 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:58:56.567488 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:58:56.567498 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:58:56.567509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:58:56.567519 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:58:56.567530 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:58:56.567544 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:58:56.567555 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:58:56.567565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:58:56.567575 kernel: NET: Registered PF_XDP protocol family Jan 28 00:58:56.567732 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 00:58:56.567874 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 00:58:56.568015 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 00:58:56.568486 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 
28 00:58:56.568640 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 00:58:56.568788 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 28 00:58:56.568802 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:58:56.568812 kernel: Initialise system trusted keyrings Jan 28 00:58:56.568822 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:58:56.568832 kernel: Key type asymmetric registered Jan 28 00:58:56.568841 kernel: Asymmetric key parser 'x509' registered Jan 28 00:58:56.568851 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 28 00:58:56.568861 kernel: io scheduler mq-deadline registered Jan 28 00:58:56.568872 kernel: io scheduler kyber registered Jan 28 00:58:56.568886 kernel: io scheduler bfq registered Jan 28 00:58:56.568896 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 00:58:56.568907 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 00:58:56.568917 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 00:58:56.568927 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 00:58:56.568937 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:58:56.568947 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 00:58:56.568958 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 00:58:56.568968 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 00:58:56.568981 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 00:58:56.568991 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 00:58:56.569300 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 28 00:58:56.569626 kernel: rtc_cmos 00:04: registered as rtc0 Jan 28 00:58:56.569775 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T00:58:55 UTC (1769561935) Jan 28 00:58:56.569932 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 28 00:58:56.569949 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 28 00:58:56.569962 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:58:56.569979 kernel: Segment Routing with IPv6 Jan 28 00:58:56.569989 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:58:56.569999 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:58:56.570009 kernel: Key type dns_resolver registered Jan 28 00:58:56.570019 kernel: IPI shorthand broadcast: enabled Jan 28 00:58:56.570029 kernel: sched_clock: Marking stable (2260046036, 339987078)->(2908485024, -308451910) Jan 28 00:58:56.570040 kernel: registered taskstats version 1 Jan 28 00:58:56.570049 kernel: Loading compiled-in X.509 certificates Jan 28 00:58:56.570060 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d' Jan 28 00:58:56.570073 kernel: Key type .fscrypt registered Jan 28 00:58:56.570083 kernel: Key type fscrypt-provisioning registered Jan 28 00:58:56.570093 kernel: ima: No TPM chip found, activating TPM-bypass! 
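The rtc_cmos line above reports the same instant twice, once as a calendar date and once as a Unix timestamp (1769561935). A one-line check in Python confirms the two agree, and that the audit timestamp 1769561933.055 seen a little earlier in the log is about two seconds before it:

    from datetime import datetime, timezone

    epoch = 1769561935   # value in parentheses on the rtc_cmos line
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2026-01-28T00:58:55+00:00, matching "2026-01-28T00:58:55 UTC"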
Jan 28 00:58:56.570103 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:58:56.570113 kernel: ima: No architecture policies found Jan 28 00:58:56.570123 kernel: clk: Disabling unused clocks Jan 28 00:58:56.570132 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 28 00:58:56.570142 kernel: Write protecting the kernel read-only data: 36864k Jan 28 00:58:56.570310 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 28 00:58:56.570322 kernel: Run /init as init process Jan 28 00:58:56.570332 kernel: with arguments: Jan 28 00:58:56.570484 kernel: /init Jan 28 00:58:56.570496 kernel: with environment: Jan 28 00:58:56.570507 kernel: HOME=/ Jan 28 00:58:56.570518 kernel: TERM=linux Jan 28 00:58:56.570530 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:58:56.570547 systemd[1]: Detected virtualization kvm. Jan 28 00:58:56.570559 systemd[1]: Detected architecture x86-64. Jan 28 00:58:56.570569 systemd[1]: Running in initrd. Jan 28 00:58:56.570580 systemd[1]: No hostname configured, using default hostname. Jan 28 00:58:56.570590 systemd[1]: Hostname set to . Jan 28 00:58:56.570602 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:58:56.570613 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:58:56.570623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:58:56.570638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:58:56.570650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:58:56.570660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:58:56.570671 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:58:56.570681 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:58:56.570694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 00:58:56.570706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 00:58:56.570720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:58:56.570730 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:58:56.570741 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:58:56.570751 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:58:56.570776 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:58:56.570790 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:58:56.570804 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:58:56.570816 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:58:56.570827 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:58:56.570838 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
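The device unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's escaped form of the underlying /dev paths: "/" becomes "-" and bytes that are not safe in a unit name become \xNN. A rough Python sketch of that mapping follows; it approximates `systemd-escape --path` and is not the actual systemd implementation:

    import string

    SAFE = set(string.ascii_letters + string.digits + ":_.")

    def escape_path(path: str) -> str:
        """Approximate systemd's path escaping used for .device unit names."""
        out = []
        for i, byte in enumerate(path.strip("/").encode()):
            ch = chr(byte)
            if ch == "/":
                out.append("-")
            elif ch in SAFE and not (ch == "." and i == 0):
                out.append(ch)
            else:
                out.append(f"\\x{byte:02x}")
        return "".join(out)

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the journal above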
Jan 28 00:58:56.570848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:58:56.570860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:58:56.570871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:58:56.570881 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:58:56.570891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:58:56.570904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:58:56.570915 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:58:56.570926 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:58:56.570937 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:58:56.570948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:58:56.570960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:58:56.571003 systemd-journald[193]: Collecting audit messages is disabled. Jan 28 00:58:56.571031 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:58:56.571043 systemd-journald[193]: Journal started Jan 28 00:58:56.571065 systemd-journald[193]: Runtime Journal (/run/log/journal/681dc7c8976243fd8e34d7cc66c55c38) is 6.0M, max 48.4M, 42.3M free. Jan 28 00:58:56.584560 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:58:56.589850 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:58:56.597218 systemd-modules-load[194]: Inserted module 'overlay' Jan 28 00:58:56.598485 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:58:56.626660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:58:56.641819 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:58:56.941752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:58:56.941795 kernel: Bridge firewalling registered Jan 28 00:58:56.665573 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 28 00:58:56.942293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:58:56.951785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:56.952321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:58:57.001768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:57.015537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:58:57.028551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:58:57.035483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:58:57.052487 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:57.064890 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:58:57.086943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 28 00:58:57.133513 dracut-cmdline[227]: dracut-dracut-053 Jan 28 00:58:57.133513 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:57.095305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:58:57.106103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:58:57.185207 systemd-resolved[233]: Positive Trust Anchors: Jan 28 00:58:57.185221 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:58:57.185247 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:58:57.189641 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 28 00:58:57.191292 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:58:57.196908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:58:57.284509 kernel: SCSI subsystem initialized Jan 28 00:58:57.302279 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:58:57.318502 kernel: iscsi: registered transport (tcp) Jan 28 00:58:57.349654 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:58:57.349749 kernel: QLogic iSCSI HBA Driver Jan 28 00:58:57.422145 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:58:57.440654 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 00:58:57.490518 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 00:58:57.490596 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:58:57.495601 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 00:58:57.550646 kernel: raid6: avx2x4 gen() 23298 MB/s Jan 28 00:58:57.570530 kernel: raid6: avx2x2 gen() 20421 MB/s Jan 28 00:58:57.591793 kernel: raid6: avx2x1 gen() 13772 MB/s Jan 28 00:58:57.591890 kernel: raid6: using algorithm avx2x4 gen() 23298 MB/s Jan 28 00:58:57.613062 kernel: raid6: .... xor() 3649 MB/s, rmw enabled Jan 28 00:58:57.613140 kernel: raid6: using avx2x2 recovery algorithm Jan 28 00:58:57.642563 kernel: xor: automatically using best checksumming function avx Jan 28 00:58:57.861567 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:58:57.884829 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:58:57.911849 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:58:57.927654 systemd-udevd[414]: Using default interface naming scheme 'v255'. 
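The raid6 lines above are the kernel benchmarking its Q-syndrome generators and selecting the fastest one. That selection can be reproduced directly from the benchmark lines; a small sketch, again assuming a saved copy of this log as input:

    import re

    GEN_RE = re.compile(r"raid6: (\w+) gen\(\) (\d+) MB/s")

    def fastest_gen(log_text: str) -> tuple[str, int]:
        """Pick the raid6 gen() implementation with the highest throughput."""
        results = {name: int(mbs) for name, mbs in GEN_RE.findall(log_text)}
        best = max(results, key=results.get)
        return best, results[best]

    # For the three benchmark lines above this returns ('avx2x4', 23298),
    # matching "raid6: using algorithm avx2x4 gen() 23298 MB/s".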
Jan 28 00:58:57.933001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:58:57.940911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:58:57.965643 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jan 28 00:58:58.017622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:58:58.035747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:58:58.133765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:58:58.159668 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:58:58.190509 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 28 00:58:58.200418 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 28 00:58:58.223558 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 00:58:58.223591 kernel: GPT:9289727 != 19775487 Jan 28 00:58:58.223607 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 00:58:58.223626 kernel: GPT:9289727 != 19775487 Jan 28 00:58:58.223640 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 00:58:58.223655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:58.194682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:58:58.224708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:58:58.234717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:58:58.241298 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:58:58.283631 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 00:58:58.283703 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Jan 28 00:58:58.288727 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:58:58.311590 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (460) Jan 28 00:58:58.321486 kernel: libata version 3.00 loaded. Jan 28 00:58:58.322917 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 28 00:58:58.344227 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:58:58.356018 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 00:58:58.356459 kernel: AVX2 version of gcm_enc/dec engaged. Jan 28 00:58:58.356479 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 00:58:58.374587 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 28 00:58:58.374869 kernel: AES CTR mode by8 optimization enabled Jan 28 00:58:58.375338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
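The GPT warnings above ("GPT:9289727 != 19775487") mean the primary header still points at a backup header written for the original ~4.8 GB image, while the virtio disk now spans 19775488 sectors (~10.1 GB, as the virtio_blk line reports). That lines up with the partition rescans and the disk-uuid.service activity later in the log, where the headers get rewritten. The kernel's check is easy to reproduce against a raw image; a minimal sketch, with the image path and 512-byte sectors as assumptions:

    import os
    import struct

    def check_alt_header(image_path: str, sector: int = 512) -> None:
        """Compare the primary GPT header's alternate-LBA field with the
        actual last LBA of the image, as the kernel does at boot."""
        last_lba = os.path.getsize(image_path) // sector - 1
        with open(image_path, "rb") as f:
            f.seek(sector)                 # primary GPT header lives at LBA 1
            header = f.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT header at LBA 1")
        alt_lba = struct.unpack_from("<Q", header, 32)[0]   # alternate-LBA field
        if alt_lba != last_lba:
            print(f"backup header expected at LBA {alt_lba}, disk ends at LBA {last_lba}")

    check_alt_header("flatcar.img")        # hypothetical path to the VM's disk image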
Jan 28 00:58:58.387485 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 00:58:58.387772 kernel: scsi host0: ahci Jan 28 00:58:58.397938 kernel: scsi host1: ahci Jan 28 00:58:58.398323 kernel: scsi host2: ahci Jan 28 00:58:58.401506 kernel: scsi host3: ahci Jan 28 00:58:58.404523 kernel: scsi host4: ahci Jan 28 00:58:58.408440 kernel: scsi host5: ahci Jan 28 00:58:58.425328 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 28 00:58:58.425483 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 28 00:58:58.425499 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 28 00:58:58.425514 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 28 00:58:58.430270 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 28 00:58:58.438788 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 28 00:58:58.446911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:58:58.464829 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 00:58:58.484020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 00:58:58.515950 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:58:58.533624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:58:58.533765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:58.560614 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:58.560656 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:58.560673 disk-uuid[555]: Primary Header is updated. Jan 28 00:58:58.560673 disk-uuid[555]: Secondary Entries is updated. Jan 28 00:58:58.560673 disk-uuid[555]: Secondary Header is updated. Jan 28 00:58:58.614912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:58.560765 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:58.575339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:58:58.575663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:58.600595 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:58:58.638535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 00:58:58.749432 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:58.749511 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:58.750516 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 28 00:58:58.752448 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:58.752494 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 28 00:58:58.752513 kernel: ata3.00: applying bridge limits Jan 28 00:58:58.752969 kernel: ata3.00: configured for UDMA/100 Jan 28 00:58:58.755429 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 00:58:58.757490 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:58.758502 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:58.847808 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 28 00:58:58.848335 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:58:58.866556 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 28 00:58:59.104025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:59.121859 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:59.143821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:59.584482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:59.586226 disk-uuid[556]: The operation has completed successfully. Jan 28 00:58:59.651853 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:58:59.652085 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:58:59.693046 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 00:58:59.712711 sh[596]: Success Jan 28 00:58:59.762564 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 28 00:58:59.842663 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:58:59.869321 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 00:58:59.881038 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 00:58:59.915063 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 Jan 28 00:58:59.915134 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:58:59.915146 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 00:58:59.919780 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:58:59.923483 kernel: BTRFS info (device dm-0): using free space tree Jan 28 00:58:59.943636 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 00:58:59.952057 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:58:59.971838 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:58:59.978477 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 28 00:59:00.019737 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:59:00.019817 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:59:00.019837 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:59:00.035446 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:59:00.051884 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 00:59:00.060669 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:59:00.078752 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:59:00.091718 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:59:00.243242 ignition[706]: Ignition 2.19.0 Jan 28 00:59:00.243260 ignition[706]: Stage: fetch-offline Jan 28 00:59:00.243323 ignition[706]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:00.243338 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:00.243624 ignition[706]: parsed url from cmdline: "" Jan 28 00:59:00.243631 ignition[706]: no config URL provided Jan 28 00:59:00.243639 ignition[706]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:59:00.263092 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:59:00.243652 ignition[706]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:59:00.243700 ignition[706]: op(1): [started] loading QEMU firmware config module Jan 28 00:59:00.243716 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 28 00:59:00.308289 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:59:00.327994 ignition[706]: op(1): [finished] loading QEMU firmware config module Jan 28 00:59:00.349049 systemd-networkd[785]: lo: Link UP Jan 28 00:59:00.349107 systemd-networkd[785]: lo: Gained carrier Jan 28 00:59:00.351616 systemd-networkd[785]: Enumeration completed Jan 28 00:59:00.351757 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:59:00.353950 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:59:00.353956 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:59:00.356017 systemd-networkd[785]: eth0: Link UP Jan 28 00:59:00.356023 systemd-networkd[785]: eth0: Gained carrier Jan 28 00:59:00.356033 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:59:00.371096 systemd[1]: Reached target network.target - Network. Jan 28 00:59:00.420136 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:59:00.508266 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.45 Jan 28 00:59:00.508339 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. 
Jan 28 00:59:00.829299 ignition[706]: parsing config with SHA512: 04d78ed7eb6a9812f7e25f00da2d2e7fc9e8acee4c87964562edc55168a440f2eb0ac32e534a48b432bb506d4b366bc8adb1a045fe210ad1833b9a75366eec5e Jan 28 00:59:00.836112 unknown[706]: fetched base config from "system" Jan 28 00:59:00.836129 unknown[706]: fetched user config from "qemu" Jan 28 00:59:00.836871 ignition[706]: fetch-offline: fetch-offline passed Jan 28 00:59:00.839927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:59:00.836934 ignition[706]: Ignition finished successfully Jan 28 00:59:00.843049 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 28 00:59:00.864773 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:59:00.928785 ignition[789]: Ignition 2.19.0 Jan 28 00:59:00.928840 ignition[789]: Stage: kargs Jan 28 00:59:00.929085 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:00.929108 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:00.930690 ignition[789]: kargs: kargs passed Jan 28 00:59:00.930760 ignition[789]: Ignition finished successfully Jan 28 00:59:00.955101 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:59:00.984661 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:59:01.017883 ignition[797]: Ignition 2.19.0 Jan 28 00:59:01.017930 ignition[797]: Stage: disks Jan 28 00:59:01.018227 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:01.018246 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:01.019097 ignition[797]: disks: disks passed Jan 28 00:59:01.019142 ignition[797]: Ignition finished successfully Jan 28 00:59:01.042009 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:59:01.047767 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:59:01.056899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:59:01.061982 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:59:01.071139 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:59:01.077256 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:59:01.112675 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:59:01.142999 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 28 00:59:01.156631 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:59:01.180949 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:59:01.352690 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 00:59:01.357285 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:59:01.362547 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:59:01.395918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:59:01.404465 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:59:01.415433 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815) Jan 28 00:59:01.418254 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
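Near the start of the block above, Ignition logs "parsing config with SHA512: 04d78ed7…" for the user config it fetched from QEMU ("fetched user config from \"qemu\""). A short sketch of how one might recompute that digest from inside the guest; the fw_cfg path and the assumption that the digest covers the raw config bytes are mine, not confirmed against the Ignition source:

    import hashlib
    from pathlib import Path

    # Blob exposed by the qemu_fw_cfg module (loaded in op(1) above) that the
    # qemu provider is assumed to read for the Ignition config.
    FW_CFG = Path("/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw")

    def config_digest() -> str:
        """SHA512 over the raw Ignition config, for comparison with the
        'parsing config with SHA512: ...' journal line."""
        return hashlib.sha512(FW_CFG.read_bytes()).hexdigest()

    print(config_digest())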
Jan 28 00:59:01.432717 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:59:01.432752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:59:01.432772 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:59:01.418459 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:59:01.454622 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:59:01.418495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:59:01.453786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:59:01.499257 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:59:01.501712 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:59:01.586339 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 00:59:01.600047 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 28 00:59:01.609058 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 00:59:01.619909 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 00:59:01.644658 systemd-networkd[785]: eth0: Gained IPv6LL Jan 28 00:59:01.822914 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:59:01.850684 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:59:01.859457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:59:01.899629 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:59:01.900583 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:59:02.815035 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:59:02.844440 ignition[928]: INFO : Ignition 2.19.0 Jan 28 00:59:02.844440 ignition[928]: INFO : Stage: mount Jan 28 00:59:02.844440 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:02.844440 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:02.863790 ignition[928]: INFO : mount: mount passed Jan 28 00:59:02.863790 ignition[928]: INFO : Ignition finished successfully Jan 28 00:59:02.894441 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:59:02.913657 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:59:02.948738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:59:02.980480 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Jan 28 00:59:02.989223 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:59:02.989289 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:59:02.989315 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:59:03.004604 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:59:03.007996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:59:04.056524 ignition[958]: INFO : Ignition 2.19.0 Jan 28 00:59:04.056524 ignition[958]: INFO : Stage: files Jan 28 00:59:04.056524 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:04.056524 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:04.081955 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:59:04.081955 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:59:04.081955 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:59:04.081955 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:59:04.081955 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:59:04.081955 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:59:04.074480 unknown[958]: wrote ssh authorized keys file for user: core Jan 28 00:59:04.127517 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:59:04.127517 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 00:59:04.294463 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:59:04.544015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 00:59:04.544015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:59:04.544015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:59:04.544015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:59:04.544015 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:59:04.628855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 00:59:04.843629 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 00:59:10.452770 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 00:59:10.452770 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 00:59:10.505447 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 28 00:59:10.642616 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:59:10.661011 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 00:59:10.661011 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 00:59:10.661011 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:59:10.661011 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:59:10.661011 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:59:10.742657 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:59:10.742657 ignition[958]: INFO : files: files passed Jan 28 00:59:10.742657 ignition[958]: INFO : Ignition finished successfully Jan 28 00:59:10.689966 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:59:10.730869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:59:10.746681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:59:10.792666 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 28 00:59:10.793022 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:59:10.835875 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 00:59:10.855741 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:59:10.855741 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:59:10.892632 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:59:10.905516 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:59:10.925759 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:59:10.953880 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:59:11.049828 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:59:11.050136 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:59:11.056728 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:59:11.072539 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:59:11.082242 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:59:11.085615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:59:11.131052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:59:11.155897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:59:11.205608 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:59:11.218659 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:59:11.231432 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:59:11.243467 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:59:11.248582 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:59:11.261675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:59:11.283692 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:59:11.294717 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:59:11.308665 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:59:11.326703 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:59:11.340427 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:59:11.352798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:59:11.369319 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:59:11.396267 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:59:11.411567 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:59:11.423825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:59:11.429724 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:59:11.444080 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
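
The Ignition files stage logged above (SSH keys for "core", files written under /sysroot/opt and /sysroot/home/core, the kubernetes sysext link, and the prepare-helm/coreos-metadata presets) is driven entirely by the rendered config. Below is a hedged sketch, in Python, of the general shape of a spec-3 Ignition config that would produce those operations; the SSH key and unit bodies are placeholders, and the field coverage is illustrative rather than exhaustive. Note the config refers to paths on the final root (e.g. /opt/...), which the stage writes under the /sysroot mount point during the initramfs.

```python
import json

# Sketch of a spec-3 Ignition config matching the operations in the log above.
# Key material and unit file contents are placeholders, not the real values.
config = {
    "ignition": {"version": "3.0.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ]
    },
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n... (placeholder)"},
            {"name": "coreos-metadata.service", "enabled": False,
             "contents": "[Unit]\n... (placeholder)"},
        ]
    },
}
print(json.dumps(config, indent=2))
```
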
Jan 28 00:59:11.461925 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:59:11.483895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:59:11.488577 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:59:11.504951 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:59:11.512527 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:59:11.525080 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:59:11.530478 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:59:11.543073 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:59:11.551633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:59:11.553604 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:59:11.558933 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:59:11.589698 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:59:11.612968 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:59:11.615115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:59:11.633996 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:59:11.634297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:59:11.645700 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:59:11.645874 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:59:11.652003 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:59:11.652278 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:59:11.697965 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:59:11.702991 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:59:11.721731 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:59:11.722077 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:59:11.729761 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:59:11.730004 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:59:11.756998 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:59:11.762789 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:59:11.762983 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:59:11.787081 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:59:11.787340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:59:12.124442 ignition[1012]: INFO : Ignition 2.19.0 Jan 28 00:59:12.124442 ignition[1012]: INFO : Stage: umount Jan 28 00:59:12.124442 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:59:12.124442 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 00:59:12.145604 ignition[1012]: INFO : umount: umount passed Jan 28 00:59:12.145604 ignition[1012]: INFO : Ignition finished successfully Jan 28 00:59:12.159611 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:59:12.159884 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 28 00:59:12.186734 systemd[1]: Stopped target network.target - Network. Jan 28 00:59:12.193826 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:59:12.193921 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:59:12.219076 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:59:12.219280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:59:12.225249 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:59:12.225339 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:59:12.234552 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:59:12.234652 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:59:12.242525 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:59:12.242626 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:59:12.247326 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:59:12.263882 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:59:12.275495 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 28 00:59:12.280665 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:59:12.281061 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:59:12.292567 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:59:12.292798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:59:12.308660 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:59:12.308749 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:59:12.328853 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:59:12.339421 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:59:12.339537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:59:12.340512 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:59:12.340571 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:59:12.341478 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:59:12.341531 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:59:12.347775 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:59:12.347838 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:59:12.356257 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:59:12.404819 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:59:12.638758 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jan 28 00:59:12.405325 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:59:12.409955 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:59:12.410060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:59:12.417986 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:59:12.418064 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:59:12.418453 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 28 00:59:12.418524 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:59:12.421908 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:59:12.421995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:59:12.427792 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:59:12.427904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:59:12.435451 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:59:12.435504 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:59:12.435569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:59:12.442312 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 28 00:59:12.442453 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:59:12.444441 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:59:12.444499 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:59:12.449513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:59:12.449571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:59:12.456571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:59:12.457238 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:59:12.492865 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:59:12.493143 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:59:12.505115 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:59:12.513340 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:59:12.549588 systemd[1]: Switching root. Jan 28 00:59:12.806821 systemd-journald[193]: Journal stopped Jan 28 00:59:15.694706 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:59:15.694809 kernel: SELinux: policy capability open_perms=1 Jan 28 00:59:15.694827 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:59:15.694899 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:59:15.694916 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:59:15.694932 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:59:15.694947 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:59:15.694963 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:59:15.694979 kernel: audit: type=1403 audit(1769561952.965:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:59:15.695003 systemd[1]: Successfully loaded SELinux policy in 122.277ms. Jan 28 00:59:15.695034 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.531ms. Jan 28 00:59:15.695052 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:59:15.695068 systemd[1]: Detected virtualization kvm. Jan 28 00:59:15.695084 systemd[1]: Detected architecture x86-64. 
Jan 28 00:59:15.695100 systemd[1]: Detected first boot. Jan 28 00:59:15.695116 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:59:15.695134 zram_generator::config[1057]: No configuration found. Jan 28 00:59:15.695158 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:59:15.695174 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:59:15.695192 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:59:15.695273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:59:15.695291 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:59:15.695307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:59:15.695326 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:59:15.695342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:59:15.695446 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:59:15.695463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:59:15.695478 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:59:15.695494 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:59:15.695510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:59:15.695526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:59:15.695542 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:59:15.695558 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:59:15.695578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:59:15.695595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:59:15.695611 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 00:59:15.695627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:59:15.695643 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:59:15.695665 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:59:15.695681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:59:15.695698 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:59:15.695718 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:59:15.695735 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:59:15.695751 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:59:15.695768 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:59:15.695784 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:59:15.695800 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:59:15.695817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:59:15.695832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 28 00:59:15.695849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:59:15.695868 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:59:15.695884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:59:15.695900 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:59:15.695916 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:59:15.695932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:15.695948 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:59:15.695963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:59:15.695979 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:59:15.695997 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:59:15.696017 systemd[1]: Reached target machines.target - Containers. Jan 28 00:59:15.696034 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:59:15.696051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:59:15.696067 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:59:15.696084 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:59:15.696101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:59:15.696118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:59:15.696138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:59:15.696160 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:59:15.696175 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:59:15.696253 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:59:15.696274 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:59:15.696293 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:59:15.696308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:59:15.696328 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:59:15.696425 kernel: fuse: init (API version 7.39) Jan 28 00:59:15.696446 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:59:15.696468 kernel: loop: module loaded Jan 28 00:59:15.696485 kernel: ACPI: bus type drm_connector registered Jan 28 00:59:15.696501 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:59:15.696517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:59:15.696533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:59:15.696549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:59:15.696572 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:59:15.696588 systemd[1]: Stopped verity-setup.service. 
Jan 28 00:59:15.696606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:15.696667 systemd-journald[1141]: Collecting audit messages is disabled. Jan 28 00:59:15.696706 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:59:15.696727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:59:15.696745 systemd-journald[1141]: Journal started Jan 28 00:59:15.696773 systemd-journald[1141]: Runtime Journal (/run/log/journal/681dc7c8976243fd8e34d7cc66c55c38) is 6.0M, max 48.4M, 42.3M free. Jan 28 00:59:14.492628 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:59:14.529590 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 00:59:14.530613 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:59:14.531095 systemd[1]: systemd-journald.service: Consumed 2.379s CPU time. Jan 28 00:59:15.718861 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:59:15.720910 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:59:15.729580 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:59:15.736845 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:59:15.743945 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:59:15.750736 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:59:15.760820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:59:15.781737 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:59:15.782162 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:59:15.789089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:59:15.789609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:59:15.796293 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:59:15.797018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:59:15.802998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:59:15.803558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:59:15.810327 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:59:15.810675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:59:15.818680 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:59:15.819956 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:59:15.828268 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:59:15.836637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:59:15.846312 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:59:15.882786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:59:15.918513 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:59:15.940732 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:59:15.949797 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 28 00:59:15.956629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:59:15.956735 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:59:15.965107 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 00:59:15.986679 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:59:15.993634 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:59:15.999804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:59:16.004070 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:59:16.012471 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:59:16.019505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:59:16.022074 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:59:16.027654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:59:16.032626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:59:16.046793 systemd-journald[1141]: Time spent on flushing to /var/log/journal/681dc7c8976243fd8e34d7cc66c55c38 is 86.907ms for 946 entries. Jan 28 00:59:16.046793 systemd-journald[1141]: System Journal (/var/log/journal/681dc7c8976243fd8e34d7cc66c55c38) is 8.0M, max 195.6M, 187.6M free. Jan 28 00:59:16.223886 systemd-journald[1141]: Received client request to flush runtime journal. Jan 28 00:59:16.224041 kernel: loop0: detected capacity change from 0 to 142488 Jan 28 00:59:16.042618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:59:16.295926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:59:16.314803 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 00:59:16.327789 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:59:16.336632 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:59:16.347324 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:59:16.358109 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:59:16.422886 kernel: hrtimer: interrupt took 3156304 ns Jan 28 00:59:16.413150 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:59:16.606608 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:59:16.631174 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 00:59:16.640727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:59:16.651623 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 28 00:59:16.698701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:59:16.700587 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:59:16.701736 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 00:59:17.055769 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 28 00:59:17.055830 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 28 00:59:17.100551 kernel: loop1: detected capacity change from 0 to 140768 Jan 28 00:59:17.112068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:59:17.128061 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:59:17.311694 kernel: loop2: detected capacity change from 0 to 224512 Jan 28 00:59:17.430602 kernel: loop3: detected capacity change from 0 to 142488 Jan 28 00:59:17.452548 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:59:17.472051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:59:17.485468 kernel: loop4: detected capacity change from 0 to 140768 Jan 28 00:59:17.618868 kernel: loop5: detected capacity change from 0 to 224512 Jan 28 00:59:17.648804 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 00:59:17.650115 (sd-merge)[1194]: Merged extensions into '/usr'. Jan 28 00:59:17.656568 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:59:17.656628 systemd[1]: Reloading... Jan 28 00:59:17.960967 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 28 00:59:17.961025 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jan 28 00:59:18.399749 zram_generator::config[1221]: No configuration found. Jan 28 00:59:18.798720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:59:19.136720 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:59:19.146164 systemd[1]: Reloading finished in 1488 ms. Jan 28 00:59:19.510278 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:59:19.516607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:59:19.524490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:59:19.549915 systemd[1]: Starting ensure-sysext.service... Jan 28 00:59:19.556791 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:59:19.594833 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:59:19.595289 systemd[1]: Reloading... Jan 28 00:59:19.685961 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:59:19.686281 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:59:19.687217 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:59:19.687572 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. 
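
The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the kubernetes image is the one the Ignition stage linked into /etc/extensions earlier in this log. A small sketch for listing the images a host exposes to systemd-sysext; the directory list is a partial assumption (the service also consults other hierarchies), not a complete description of its search path.

```python
import os

# List extension images visible to systemd-sysext. /etc/extensions is where
# Ignition created kubernetes.raw above; the other directories are assumed
# common locations and may not cover every hierarchy sysext consults.
for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
    if os.path.isdir(d):
        for name in sorted(os.listdir(d)):
            print(os.path.join(d, name))
```
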
Jan 28 00:59:19.687647 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 28 00:59:19.693302 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:59:19.693467 systemd-tmpfiles[1263]: Skipping /boot Jan 28 00:59:20.013599 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:59:20.013719 systemd-tmpfiles[1263]: Skipping /boot Jan 28 00:59:20.045483 zram_generator::config[1289]: No configuration found. Jan 28 00:59:20.502488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:59:20.562693 systemd[1]: Reloading finished in 963 ms. Jan 28 00:59:20.589542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:59:20.606130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:59:20.722340 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:59:20.730512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:59:20.737730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:59:20.754138 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:59:20.762737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:59:20.786080 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:59:20.794623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:20.794922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:59:20.798901 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:59:20.805978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:59:20.813706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:59:20.817911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:59:20.823052 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Jan 28 00:59:20.824555 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:59:20.829481 augenrules[1352]: No rules Jan 28 00:59:20.828423 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:20.829802 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:59:20.834745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:59:20.842122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:59:20.842704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:59:20.847643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:59:20.847904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:59:20.859024 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 28 00:59:20.860408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:59:20.896980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:59:20.904290 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:59:20.951687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:59:20.963766 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:20.964069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:59:20.999756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:59:21.006010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:59:21.024506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:59:21.045673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:59:21.049767 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:59:21.061857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:59:21.087712 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:59:21.090876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:59:21.090927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:59:21.092002 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:59:21.096513 systemd[1]: Finished ensure-sysext.service. Jan 28 00:59:21.101049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:59:21.101632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:59:21.102440 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:59:21.102702 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:59:21.103501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:59:21.103723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:59:21.104836 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:59:21.105096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:59:21.259431 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 00:59:21.259864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:59:21.259954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:59:21.272736 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:59:21.523507 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382) Jan 28 00:59:21.529282 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 28 00:59:21.610687 systemd-resolved[1338]: Positive Trust Anchors: Jan 28 00:59:21.610727 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:59:21.610754 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:59:21.616440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 28 00:59:21.622466 kernel: ACPI: button: Power Button [PWRF] Jan 28 00:59:21.631217 systemd-resolved[1338]: Defaulting to hostname 'linux'. Jan 28 00:59:21.637900 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:59:21.642046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:59:21.857149 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 00:59:21.859185 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 00:59:21.859553 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 00:59:21.860648 systemd-networkd[1393]: lo: Link UP Jan 28 00:59:21.860660 systemd-networkd[1393]: lo: Gained carrier Jan 28 00:59:21.867095 systemd-networkd[1393]: Enumeration completed Jan 28 00:59:21.867228 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:59:21.887276 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:59:21.887314 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:59:21.888703 systemd[1]: Reached target network.target - Network. Jan 28 00:59:21.892447 systemd-networkd[1393]: eth0: Link UP Jan 28 00:59:21.892484 systemd-networkd[1393]: eth0: Gained carrier Jan 28 00:59:21.892508 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:59:21.897695 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:59:22.034628 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:59:22.040718 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:59:22.059727 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.45/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 00:59:22.063314 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Jan 28 00:59:22.069811 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 00:59:22.070034 systemd-timesyncd[1403]: Initial clock synchronization to Wed 2026-01-28 00:59:22.328204 UTC. Jan 28 00:59:22.121975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:59:22.128751 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 28 00:59:22.137622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 28 00:59:22.175205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:59:22.205314 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:59:22.464457 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:59:22.495772 kernel: kvm_amd: TSC scaling supported Jan 28 00:59:22.495854 kernel: kvm_amd: Nested Virtualization enabled Jan 28 00:59:22.495898 kernel: kvm_amd: Nested Paging enabled Jan 28 00:59:22.498066 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 00:59:22.500921 kernel: kvm_amd: PMU virtualization is disabled Jan 28 00:59:22.645476 kernel: EDAC MC: Ver: 3.0.0 Jan 28 00:59:22.743586 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 00:59:22.959037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:59:23.014921 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 00:59:23.036768 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:59:23.079257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 00:59:23.085194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:59:23.089948 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:59:23.094336 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:59:23.099504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:59:23.104905 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:59:23.108937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:59:23.114037 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:59:23.118585 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:59:23.118644 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:59:23.121893 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:59:23.127670 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:59:23.135016 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:59:23.144992 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:59:23.152572 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 00:59:23.158994 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:59:23.164897 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:59:23.169650 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:59:23.181743 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:59:23.182146 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:59:23.184921 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:59:23.188163 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:59:23.191731 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 28 00:59:23.197662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:59:23.208000 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:59:23.213627 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:59:23.215778 jq[1434]: false Jan 28 00:59:23.217689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:59:23.225110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:59:23.233804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:59:23.245262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:59:23.253639 extend-filesystems[1435]: Found loop3 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found loop4 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found loop5 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found sr0 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found vda Jan 28 00:59:23.253639 extend-filesystems[1435]: Found vda1 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found vda2 Jan 28 00:59:23.253639 extend-filesystems[1435]: Found vda3 Jan 28 00:59:23.291227 extend-filesystems[1435]: Found usr Jan 28 00:59:23.291227 extend-filesystems[1435]: Found vda4 Jan 28 00:59:23.291227 extend-filesystems[1435]: Found vda6 Jan 28 00:59:23.291227 extend-filesystems[1435]: Found vda7 Jan 28 00:59:23.291227 extend-filesystems[1435]: Found vda9 Jan 28 00:59:23.291227 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 28 00:59:23.267801 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:59:23.263622 dbus-daemon[1433]: [system] SELinux support is enabled Jan 28 00:59:23.269584 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:59:23.270365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:59:23.274759 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:59:23.301066 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:59:23.315869 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:59:23.324685 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 00:59:23.329290 jq[1453]: true Jan 28 00:59:23.333286 update_engine[1449]: I20260128 00:59:23.333011 1449 main.cc:92] Flatcar Update Engine starting Jan 28 00:59:23.340319 update_engine[1449]: I20260128 00:59:23.336411 1449 update_check_scheduler.cc:74] Next update check in 3m32s Jan 28 00:59:23.339071 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:59:23.341067 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 28 00:59:23.339320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:59:23.340246 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:59:23.340960 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:59:23.360877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 28 00:59:23.362803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:59:23.368250 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Jan 28 00:59:23.393163 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 00:59:23.454661 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:59:23.459449 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:59:23.464570 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:59:23.464686 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:59:23.539777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1382) Jan 28 00:59:23.540068 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 00:59:23.540098 jq[1460]: true Jan 28 00:59:23.540496 tar[1458]: linux-amd64/LICENSE Jan 28 00:59:23.479800 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:59:23.552038 tar[1458]: linux-amd64/helm Jan 28 00:59:23.479886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:59:23.552164 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 00:59:23.552164 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 00:59:23.552164 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 28 00:59:23.614316 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 28 00:59:23.553733 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:59:23.554078 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:59:23.615211 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:59:23.660778 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:59:23.669504 systemd-networkd[1393]: eth0: Gained IPv6LL Jan 28 00:59:23.702657 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 28 00:59:23.702705 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:59:23.703992 systemd-logind[1447]: New seat seat0. Jan 28 00:59:23.723029 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:59:23.751837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:59:23.775817 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:59:23.835344 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 00:59:23.856852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:23.869646 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:59:23.874999 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
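
The resize logged above grows the root filesystem on /dev/vda9 from 553472 to 1864699 4 KiB blocks. A quick check of what those figures mean in bytes, using only arithmetic on the logged numbers:

```python
BLOCK = 4096  # EXT4 block size implied by the "(4k) blocks" resize message

before = 553472 * BLOCK   # size before the online resize
after = 1864699 * BLOCK   # size after the online resize
for label, size in (("before", before), ("after", after)):
    print(f"{label}: {size} bytes ≈ {size / 2**30:.2f} GiB")
# Prints roughly 2.11 GiB before and 7.11 GiB after.
```
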
Jan 28 00:59:23.883786 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:59:23.891445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:59:23.938362 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:59:23.942083 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 00:59:24.326253 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:59:24.328458 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:59:24.505539 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:59:24.513199 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 00:59:24.532551 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 00:59:24.541594 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:59:24.556835 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:59:24.575433 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:59:25.055174 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:59:25.071911 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:59:25.081737 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 00:59:25.086894 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:59:26.432496 containerd[1461]: time="2026-01-28T00:59:26.428714549Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 00:59:26.575542 containerd[1461]: time="2026-01-28T00:59:26.575081941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.579773 containerd[1461]: time="2026-01-28T00:59:26.579656946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:59:26.579773 containerd[1461]: time="2026-01-28T00:59:26.579759046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 00:59:26.579835 containerd[1461]: time="2026-01-28T00:59:26.579802700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 00:59:26.580084 containerd[1461]: time="2026-01-28T00:59:26.580037935Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 00:59:26.580084 containerd[1461]: time="2026-01-28T00:59:26.580075294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580272 containerd[1461]: time="2026-01-28T00:59:26.580239730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580294 containerd[1461]: time="2026-01-28T00:59:26.580270856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580651 containerd[1461]: time="2026-01-28T00:59:26.580601608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580651 containerd[1461]: time="2026-01-28T00:59:26.580638610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580695 containerd[1461]: time="2026-01-28T00:59:26.580652950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580695 containerd[1461]: time="2026-01-28T00:59:26.580663811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.580829 containerd[1461]: time="2026-01-28T00:59:26.580801346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.581481 containerd[1461]: time="2026-01-28T00:59:26.581358654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:59:26.581791 containerd[1461]: time="2026-01-28T00:59:26.581731710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:59:26.581791 containerd[1461]: time="2026-01-28T00:59:26.581767923Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 00:59:26.581902 containerd[1461]: time="2026-01-28T00:59:26.581865253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 00:59:26.582000 containerd[1461]: time="2026-01-28T00:59:26.581966933Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:59:26.592159 containerd[1461]: time="2026-01-28T00:59:26.592012239Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 00:59:26.592322 containerd[1461]: time="2026-01-28T00:59:26.592261189Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 00:59:26.592450 containerd[1461]: time="2026-01-28T00:59:26.592436985Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 00:59:26.592504 containerd[1461]: time="2026-01-28T00:59:26.592492309Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 00:59:26.592595 containerd[1461]: time="2026-01-28T00:59:26.592581276Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 00:59:26.593087 containerd[1461]: time="2026-01-28T00:59:26.592999062Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 28 00:59:26.593642 containerd[1461]: time="2026-01-28T00:59:26.593622256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.593993039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594014054Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594029417Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594042764Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594077893Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594090821Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594103492Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594115765Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594127812Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594138979Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594170075Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594205879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594219830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.594740 containerd[1461]: time="2026-01-28T00:59:26.594231110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594242686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594254784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594266125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594276842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594287763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594336352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594425340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594440007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594450898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594479957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594511360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594562384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594575239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.595039 containerd[1461]: time="2026-01-28T00:59:26.594584820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 00:59:26.595379 containerd[1461]: time="2026-01-28T00:59:26.595288988Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 00:59:26.595959 containerd[1461]: time="2026-01-28T00:59:26.595907709Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 00:59:26.596055 containerd[1461]: time="2026-01-28T00:59:26.596004578Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 00:59:26.596111 containerd[1461]: time="2026-01-28T00:59:26.596097558Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 00:59:26.596151 containerd[1461]: time="2026-01-28T00:59:26.596140845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 00:59:26.596235 containerd[1461]: time="2026-01-28T00:59:26.596219966Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 00:59:26.596323 containerd[1461]: time="2026-01-28T00:59:26.596310960Z" level=info msg="NRI interface is disabled by configuration." Jan 28 00:59:26.596515 containerd[1461]: time="2026-01-28T00:59:26.596495979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 00:59:26.597415 containerd[1461]: time="2026-01-28T00:59:26.597283494Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 00:59:26.599392 containerd[1461]: time="2026-01-28T00:59:26.598319070Z" level=info msg="Connect containerd service" Jan 28 00:59:26.599392 containerd[1461]: time="2026-01-28T00:59:26.598458223Z" level=info msg="using legacy CRI server" Jan 28 00:59:26.599392 containerd[1461]: time="2026-01-28T00:59:26.598471068Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:59:26.599392 containerd[1461]: time="2026-01-28T00:59:26.598819621Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 00:59:26.600826 containerd[1461]: time="2026-01-28T00:59:26.600474869Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:59:26.601627 
containerd[1461]: time="2026-01-28T00:59:26.601448254Z" level=info msg="Start subscribing containerd event" Jan 28 00:59:26.601784 containerd[1461]: time="2026-01-28T00:59:26.601759590Z" level=info msg="Start recovering state" Jan 28 00:59:26.602050 containerd[1461]: time="2026-01-28T00:59:26.601762127Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:59:26.602562 containerd[1461]: time="2026-01-28T00:59:26.602509509Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:59:26.602639 containerd[1461]: time="2026-01-28T00:59:26.602622440Z" level=info msg="Start event monitor" Jan 28 00:59:26.604541 containerd[1461]: time="2026-01-28T00:59:26.603485392Z" level=info msg="Start snapshots syncer" Jan 28 00:59:26.604541 containerd[1461]: time="2026-01-28T00:59:26.603542946Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:59:26.604541 containerd[1461]: time="2026-01-28T00:59:26.603559836Z" level=info msg="Start streaming server" Jan 28 00:59:26.604308 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:59:26.605645 containerd[1461]: time="2026-01-28T00:59:26.605621651Z" level=info msg="containerd successfully booted in 0.185115s" Jan 28 00:59:26.916677 tar[1458]: linux-amd64/README.md Jan 28 00:59:27.032309 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:59:29.913198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:29.941158 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:59:29.945892 systemd[1]: Startup finished in 2.507s (kernel) + 17.023s (initrd) + 17.099s (userspace) = 36.630s. Jan 28 00:59:29.960227 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:32.330501 kubelet[1546]: E0128 00:59:32.330118 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:32.334603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:32.334825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:32.335460 systemd[1]: kubelet.service: Consumed 7.741s CPU time. Jan 28 00:59:32.888173 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:59:32.889831 systemd[1]: Started sshd@0-10.0.0.45:22-10.0.0.1:50324.service - OpenSSH per-connection server daemon (10.0.0.1:50324). Jan 28 00:59:32.972792 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 50324 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:32.975787 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:32.993125 systemd-logind[1447]: New session 1 of user core. Jan 28 00:59:32.994908 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:59:33.010920 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:59:33.026903 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:59:33.041747 systemd[1]: Starting user@500.service - User Manager for UID 500... 
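During initialization above, containerd skips the aufs, btrfs, devmapper and zfs snapshotters, keeps overlayfs, and warns that no CNI network config exists yet in /etc/cni/net.d, which is expected before a CNI plugin is installed. A sketch, assuming the ctr client that ships with containerd is available on the host, for verifying that state:

ctr plugins ls       # per-plugin status: ok for overlayfs, skip for aufs/btrfs/devmapper/zfs
ctr version          # server should report v1.7.21 per the log above
ls /etc/cni/net.d    # empty until a CNI plugin drops its config here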
Jan 28 00:59:33.045753 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:59:33.202482 systemd[1563]: Queued start job for default target default.target. Jan 28 00:59:33.212240 systemd[1563]: Created slice app.slice - User Application Slice. Jan 28 00:59:33.212292 systemd[1563]: Reached target paths.target - Paths. Jan 28 00:59:33.212305 systemd[1563]: Reached target timers.target - Timers. Jan 28 00:59:33.218301 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:59:33.242237 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:59:33.242518 systemd[1563]: Reached target sockets.target - Sockets. Jan 28 00:59:33.242542 systemd[1563]: Reached target basic.target - Basic System. Jan 28 00:59:33.242639 systemd[1563]: Reached target default.target - Main User Target. Jan 28 00:59:33.242683 systemd[1563]: Startup finished in 181ms. Jan 28 00:59:33.243319 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:59:33.264407 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:59:33.361222 systemd[1]: Started sshd@1-10.0.0.45:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340). Jan 28 00:59:33.423964 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:33.425786 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:33.432560 systemd-logind[1447]: New session 2 of user core. Jan 28 00:59:33.439641 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:59:33.501877 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:33.515852 systemd[1]: sshd@1-10.0.0.45:22-10.0.0.1:50340.service: Deactivated successfully. Jan 28 00:59:33.518509 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 00:59:33.520709 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 28 00:59:33.531140 systemd[1]: Started sshd@2-10.0.0.45:22-10.0.0.1:50344.service - OpenSSH per-connection server daemon (10.0.0.1:50344). Jan 28 00:59:33.533143 systemd-logind[1447]: Removed session 2. Jan 28 00:59:33.564970 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 50344 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:33.566709 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:33.572262 systemd-logind[1447]: New session 3 of user core. Jan 28 00:59:33.582728 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:59:33.638538 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:33.656893 systemd[1]: sshd@2-10.0.0.45:22-10.0.0.1:50344.service: Deactivated successfully. Jan 28 00:59:33.667555 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:59:33.670188 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:59:33.688183 systemd[1]: Started sshd@3-10.0.0.45:22-10.0.0.1:50354.service - OpenSSH per-connection server daemon (10.0.0.1:50354). Jan 28 00:59:33.690334 systemd-logind[1447]: Removed session 3. 
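The entries above show the sshd per-connection units handing logins to systemd-logind, which creates sessions 1 through 3 for user core (UID 500) and starts the per-user manager user@500.service. A sketch, not from the log, of inspecting that state with standard logind tooling:

loginctl list-sessions                                 # session overview
loginctl show-session 1 -p Name -p Remote -p Service   # who, from where, via which service
systemctl status user@500.service                      # the per-user manager started above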
Jan 28 00:59:33.721465 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 50354 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:33.723772 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:33.730543 systemd-logind[1447]: New session 4 of user core. Jan 28 00:59:33.748784 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:59:33.814304 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:33.826655 systemd[1]: sshd@3-10.0.0.45:22-10.0.0.1:50354.service: Deactivated successfully. Jan 28 00:59:33.828548 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:59:33.830256 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:59:33.837705 systemd[1]: Started sshd@4-10.0.0.45:22-10.0.0.1:50370.service - OpenSSH per-connection server daemon (10.0.0.1:50370). Jan 28 00:59:33.838948 systemd-logind[1447]: Removed session 4. Jan 28 00:59:33.877407 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 50370 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:33.879144 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:33.884703 systemd-logind[1447]: New session 5 of user core. Jan 28 00:59:33.894573 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:59:33.965991 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:59:33.966530 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:33.995587 sudo[1599]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:33.998084 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:34.016752 systemd[1]: sshd@4-10.0.0.45:22-10.0.0.1:50370.service: Deactivated successfully. Jan 28 00:59:34.018674 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:59:34.020194 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:59:34.027773 systemd[1]: Started sshd@5-10.0.0.45:22-10.0.0.1:50386.service - OpenSSH per-connection server daemon (10.0.0.1:50386). Jan 28 00:59:34.028974 systemd-logind[1447]: Removed session 5. Jan 28 00:59:34.061863 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 50386 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:34.064053 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:34.069984 systemd-logind[1447]: New session 6 of user core. Jan 28 00:59:34.079632 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:59:34.138453 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:59:34.138806 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:34.144606 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:34.152585 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 00:59:34.153014 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:34.175724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 00:59:34.178147 auditctl[1611]: No rules Jan 28 00:59:34.178621 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 28 00:59:34.178873 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 00:59:34.181802 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:59:34.220568 augenrules[1629]: No rules Jan 28 00:59:34.222536 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:59:34.224077 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:34.226677 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:34.242472 systemd[1]: sshd@5-10.0.0.45:22-10.0.0.1:50386.service: Deactivated successfully. Jan 28 00:59:34.244534 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:59:34.246540 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:59:34.256171 systemd[1]: Started sshd@6-10.0.0.45:22-10.0.0.1:50400.service - OpenSSH per-connection server daemon (10.0.0.1:50400). Jan 28 00:59:34.257700 systemd-logind[1447]: Removed session 6. Jan 28 00:59:34.286074 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 50400 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 00:59:34.287990 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:34.293602 systemd-logind[1447]: New session 7 of user core. Jan 28 00:59:34.304638 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:59:34.363765 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:59:34.364265 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:35.967858 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:59:35.968050 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:59:36.352677 dockerd[1659]: time="2026-01-28T00:59:36.350146630Z" level=info msg="Starting up" Jan 28 00:59:36.497628 dockerd[1659]: time="2026-01-28T00:59:36.497484715Z" level=info msg="Loading containers: start." Jan 28 00:59:36.684437 kernel: Initializing XFRM netlink socket Jan 28 00:59:36.852828 systemd-networkd[1393]: docker0: Link UP Jan 28 00:59:36.885266 dockerd[1659]: time="2026-01-28T00:59:36.885173386Z" level=info msg="Loading containers: done." Jan 28 00:59:36.930081 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2498581513-merged.mount: Deactivated successfully. Jan 28 00:59:36.937971 dockerd[1659]: time="2026-01-28T00:59:36.937852441Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:59:36.938105 dockerd[1659]: time="2026-01-28T00:59:36.937993955Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 00:59:36.938406 dockerd[1659]: time="2026-01-28T00:59:36.938228126Z" level=info msg="Daemon has completed initialization" Jan 28 00:59:36.998589 dockerd[1659]: time="2026-01-28T00:59:36.998487848Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:59:36.998820 systemd[1]: Started docker.service - Docker Application Container Engine. 
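The sudo-driven audit-rules restart above runs auditctl, which reports "No rules", and then augenrules, which also finds none after the two files under /etc/audit/rules.d were removed. A sketch of the same reload cycle done by hand; augenrules merges /etc/audit/rules.d/*.rules into /etc/audit/audit.rules before loading:

auditctl -l                             # list currently loaded rules ("No rules" here)
augenrules --load                       # rebuild and load rules from /etc/audit/rules.d
systemctl status audit-rules.service    # the unit restarted via sudo above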
Jan 28 00:59:38.263104 containerd[1461]: time="2026-01-28T00:59:38.262905214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:59:39.143975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089455928.mount: Deactivated successfully. Jan 28 00:59:42.833988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:59:42.929718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:43.563934 containerd[1461]: time="2026-01-28T00:59:43.563777220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:43.582130 containerd[1461]: time="2026-01-28T00:59:43.581865765Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 00:59:43.611098 containerd[1461]: time="2026-01-28T00:59:43.611028048Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:43.614911 containerd[1461]: time="2026-01-28T00:59:43.614802120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:43.617558 containerd[1461]: time="2026-01-28T00:59:43.617456697Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 5.354451652s" Jan 28 00:59:43.617558 containerd[1461]: time="2026-01-28T00:59:43.617514675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 00:59:43.619869 containerd[1461]: time="2026-01-28T00:59:43.619797612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:59:44.348543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:44.354127 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:44.531845 kubelet[1876]: E0128 00:59:44.531698 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:44.537312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:44.537569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:44.537977 systemd[1]: kubelet.service: Consumed 1.518s CPU time. 
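kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist; on a kubeadm-managed node that file is only written during kubeadm init or kubeadm join, so repeated failures before the node is bootstrapped are expected. A sketch, not from the log, for confirming that:

ls -l /var/lib/kubelet/config.yaml    # absent until kubeadm init/join writes it
systemctl cat kubelet.service         # drop-ins and the env files the unit references
journalctl -u kubelet.service -n 20   # the same "failed to load kubelet config file" error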
Jan 28 00:59:47.165946 containerd[1461]: time="2026-01-28T00:59:47.165313025Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 00:59:47.169588 containerd[1461]: time="2026-01-28T00:59:47.165472085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:47.175050 containerd[1461]: time="2026-01-28T00:59:47.174769305Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:47.188890 containerd[1461]: time="2026-01-28T00:59:47.188453696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:47.220551 containerd[1461]: time="2026-01-28T00:59:47.220177553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.600323157s" Jan 28 00:59:47.220551 containerd[1461]: time="2026-01-28T00:59:47.220467624Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 00:59:47.224975 containerd[1461]: time="2026-01-28T00:59:47.224863618Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:59:49.699501 containerd[1461]: time="2026-01-28T00:59:49.699229647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:49.700783 containerd[1461]: time="2026-01-28T00:59:49.700226168Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 00:59:49.701832 containerd[1461]: time="2026-01-28T00:59:49.701772528Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:49.705698 containerd[1461]: time="2026-01-28T00:59:49.705625639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:49.707277 containerd[1461]: time="2026-01-28T00:59:49.707201060Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.482187731s" Jan 28 00:59:49.707277 containerd[1461]: time="2026-01-28T00:59:49.707257713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 00:59:49.708652 
containerd[1461]: time="2026-01-28T00:59:49.708572220Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:59:51.958129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709960768.mount: Deactivated successfully. Jan 28 00:59:53.364609 containerd[1461]: time="2026-01-28T00:59:53.364423962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.365967 containerd[1461]: time="2026-01-28T00:59:53.365115846Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 00:59:53.366705 containerd[1461]: time="2026-01-28T00:59:53.366598756Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.370802 containerd[1461]: time="2026-01-28T00:59:53.370720718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.371917 containerd[1461]: time="2026-01-28T00:59:53.371824704Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.663039859s" Jan 28 00:59:53.371917 containerd[1461]: time="2026-01-28T00:59:53.371897794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 00:59:53.373887 containerd[1461]: time="2026-01-28T00:59:53.373622594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:59:54.240769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341835763.mount: Deactivated successfully. Jan 28 00:59:54.573156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:59:54.664141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:55.471775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:55.489234 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:56.078826 kubelet[1920]: E0128 00:59:56.078475 1920 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:56.083489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:56.083764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:56.084326 systemd[1]: kubelet.service: Consumed 1.391s CPU time. 
Jan 28 00:59:58.197234 containerd[1461]: time="2026-01-28T00:59:58.196994292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.198646 containerd[1461]: time="2026-01-28T00:59:58.197846832Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 00:59:58.199729 containerd[1461]: time="2026-01-28T00:59:58.199656244Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.204111 containerd[1461]: time="2026-01-28T00:59:58.204033409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.205801 containerd[1461]: time="2026-01-28T00:59:58.205674595Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.8320181s" Jan 28 00:59:58.205801 containerd[1461]: time="2026-01-28T00:59:58.205737469Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 00:59:58.206978 containerd[1461]: time="2026-01-28T00:59:58.206904087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:59:58.807879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601505538.mount: Deactivated successfully. 
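Each control-plane image above follows the same pattern: a PullImage request, ImageCreate events for the tag, the image ID and the digest, then a "Pulled image ... in N" summary with the size in bytes. A sketch for inspecting the same content store directly, assuming crictl is installed and pointed at /run/containerd/containerd.sock:

crictl images                              # kube-apiserver, coredns, pause, ... as pulled above
ctr -n k8s.io images ls | grep coredns     # containerd's own view of the k8s.io namespace
crictl pull registry.k8s.io/pause:3.10     # pulling an already-present image is effectively a no-op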
Jan 28 00:59:58.815960 containerd[1461]: time="2026-01-28T00:59:58.815831239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.817186 containerd[1461]: time="2026-01-28T00:59:58.816946286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 00:59:58.834168 containerd[1461]: time="2026-01-28T00:59:58.834088168Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.839198 containerd[1461]: time="2026-01-28T00:59:58.839088954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:58.840257 containerd[1461]: time="2026-01-28T00:59:58.840215589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.284229ms" Jan 28 00:59:58.840493 containerd[1461]: time="2026-01-28T00:59:58.840261804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 00:59:58.841872 containerd[1461]: time="2026-01-28T00:59:58.841808398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:59:59.363875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount102169599.mount: Deactivated successfully. Jan 28 01:00:05.525685 containerd[1461]: time="2026-01-28T01:00:05.525208335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:05.534269 containerd[1461]: time="2026-01-28T01:00:05.530694001Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 01:00:05.548620 containerd[1461]: time="2026-01-28T01:00:05.548528718Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:05.557136 containerd[1461]: time="2026-01-28T01:00:05.556978907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:05.559247 containerd[1461]: time="2026-01-28T01:00:05.559194904Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.717337364s" Jan 28 01:00:05.559540 containerd[1461]: time="2026-01-28T01:00:05.559465677Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:00:06.306016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
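The etcd pull above completes after roughly 6.7s, and systemd immediately re-queues kubelet ("Scheduled restart job, restart counter is at 3"), i.e. the unit's Restart= policy keeps retrying after each config failure. A sketch of inspecting that bookkeeping with standard systemd properties:

systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
systemctl status kubelet.service    # shows "activating (auto-restart)" between attempts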
Jan 28 01:00:06.317682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:00:06.697702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:06.701712 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:00:06.934004 kubelet[2056]: E0128 01:00:06.933688 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:00:06.940009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:00:06.940329 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:00:08.349478 update_engine[1449]: I20260128 01:00:08.348850 1449 update_attempter.cc:509] Updating boot flags... Jan 28 01:00:08.408432 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2073) Jan 28 01:00:08.469414 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2072) Jan 28 01:00:08.508447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2072) Jan 28 01:00:08.919564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:09.022134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:00:09.073886 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)... Jan 28 01:00:09.073952 systemd[1]: Reloading... Jan 28 01:00:09.371450 zram_generator::config[2157]: No configuration found. Jan 28 01:00:09.516013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:00:09.671279 systemd[1]: Reloading finished in 596 ms. Jan 28 01:00:09.745643 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:00:09.745805 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:00:09.746278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:09.750155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:00:09.941759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:09.959787 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:00:10.065875 kubelet[2174]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:00:10.065875 kubelet[2174]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:00:10.065875 kubelet[2174]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
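During the daemon reload above, systemd warns that docker.socket line 6 still references the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock on the fly. A sketch of a drop-in that would silence the warning, assuming editing the vendor unit is acceptable on this host; list-type settings such as ListenStream= must be cleared before being reset:

# systemctl edit docker.socket, then add:
[Socket]
ListenStream=
ListenStream=/run/docker.sock
# followed by: systemctl daemon-reload && systemctl restart docker.socket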
Jan 28 01:00:10.066638 kubelet[2174]: I0128 01:00:10.065963 2174 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:00:10.651158 kubelet[2174]: I0128 01:00:10.651038 2174 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:00:10.651158 kubelet[2174]: I0128 01:00:10.651147 2174 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:00:10.652094 kubelet[2174]: I0128 01:00:10.652029 2174 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:00:10.683229 kubelet[2174]: E0128 01:00:10.683161 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:10.685151 kubelet[2174]: I0128 01:00:10.683765 2174 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:00:10.700750 kubelet[2174]: E0128 01:00:10.700646 2174 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:00:10.700750 kubelet[2174]: I0128 01:00:10.700736 2174 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:00:10.712567 kubelet[2174]: I0128 01:00:10.710498 2174 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:00:10.712567 kubelet[2174]: I0128 01:00:10.712521 2174 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:00:10.713156 kubelet[2174]: I0128 01:00:10.712591 2174 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:00:10.713566 kubelet[2174]: I0128 01:00:10.713172 2174 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:00:10.713566 kubelet[2174]: I0128 01:00:10.713184 2174 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:00:10.713702 kubelet[2174]: I0128 01:00:10.713582 2174 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:00:10.718613 kubelet[2174]: I0128 01:00:10.718262 2174 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:00:10.718613 kubelet[2174]: I0128 01:00:10.718422 2174 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:00:10.718613 kubelet[2174]: I0128 01:00:10.718517 2174 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:00:10.718613 kubelet[2174]: I0128 01:00:10.718547 2174 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:00:10.721282 kubelet[2174]: W0128 01:00:10.721068 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:10.721282 kubelet[2174]: W0128 01:00:10.721166 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:10.721282 kubelet[2174]: E0128 01:00:10.721207 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:10.721282 kubelet[2174]: E0128 01:00:10.721216 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:10.722895 kubelet[2174]: I0128 01:00:10.722843 2174 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:00:10.724941 kubelet[2174]: I0128 01:00:10.724743 2174 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:00:10.726697 kubelet[2174]: W0128 01:00:10.726517 2174 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:00:10.736182 kubelet[2174]: I0128 01:00:10.736106 2174 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:00:10.737912 kubelet[2174]: I0128 01:00:10.736343 2174 server.go:1287] "Started kubelet" Jan 28 01:00:10.738644 kubelet[2174]: I0128 01:00:10.738498 2174 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:00:10.739008 kubelet[2174]: I0128 01:00:10.738942 2174 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:00:10.739619 kubelet[2174]: I0128 01:00:10.739593 2174 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:00:10.740662 kubelet[2174]: I0128 01:00:10.740631 2174 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:00:10.743838 kubelet[2174]: I0128 01:00:10.743814 2174 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:00:10.744457 kubelet[2174]: I0128 01:00:10.744015 2174 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:00:10.745959 kubelet[2174]: E0128 01:00:10.745826 2174 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:00:10.746159 kubelet[2174]: I0128 01:00:10.746145 2174 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:00:10.746677 kubelet[2174]: I0128 01:00:10.746662 2174 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:00:10.746911 kubelet[2174]: I0128 01:00:10.746899 2174 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:00:10.747425 kubelet[2174]: W0128 01:00:10.747391 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:10.747500 kubelet[2174]: E0128 01:00:10.747484 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" 
logger="UnhandledError" Jan 28 01:00:10.748731 kubelet[2174]: E0128 01:00:10.745158 2174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf45f1f54e80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:00:10.736217728 +0000 UTC m=+0.750372420,LastTimestamp:2026-01-28 01:00:10.736217728 +0000 UTC m=+0.750372420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:00:10.748844 kubelet[2174]: I0128 01:00:10.748726 2174 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:00:10.748903 kubelet[2174]: I0128 01:00:10.748866 2174 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:00:10.757229 kubelet[2174]: E0128 01:00:10.757203 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="200ms" Jan 28 01:00:10.758282 kubelet[2174]: I0128 01:00:10.758263 2174 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:00:10.773880 kubelet[2174]: E0128 01:00:10.773857 2174 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:00:10.776234 kubelet[2174]: I0128 01:00:10.776144 2174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:00:10.778225 kubelet[2174]: I0128 01:00:10.778168 2174 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:00:10.778420 kubelet[2174]: I0128 01:00:10.778387 2174 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:00:10.779437 kubelet[2174]: I0128 01:00:10.778870 2174 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 01:00:10.779437 kubelet[2174]: I0128 01:00:10.778910 2174 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:00:10.779437 kubelet[2174]: E0128 01:00:10.779025 2174 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:00:10.780275 kubelet[2174]: W0128 01:00:10.780240 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:10.780727 kubelet[2174]: E0128 01:00:10.780432 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:10.787883 kubelet[2174]: I0128 01:00:10.787825 2174 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:00:10.787883 kubelet[2174]: I0128 01:00:10.787858 2174 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:00:10.787883 kubelet[2174]: I0128 01:00:10.787890 2174 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:00:10.846565 kubelet[2174]: E0128 01:00:10.846454 2174 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:00:10.880220 kubelet[2174]: E0128 01:00:10.880084 2174 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 01:00:10.903786 kubelet[2174]: I0128 01:00:10.903632 2174 policy_none.go:49] "None policy: Start" Jan 28 01:00:10.903786 kubelet[2174]: I0128 01:00:10.903755 2174 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:00:10.903940 kubelet[2174]: I0128 01:00:10.903834 2174 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:00:10.913520 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:00:10.936675 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:00:10.941573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:00:10.946728 kubelet[2174]: E0128 01:00:10.946660 2174 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:00:10.955342 kubelet[2174]: I0128 01:00:10.955271 2174 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:00:10.955904 kubelet[2174]: I0128 01:00:10.955857 2174 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:00:10.956012 kubelet[2174]: I0128 01:00:10.955924 2174 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:00:10.957265 kubelet[2174]: I0128 01:00:10.957237 2174 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:00:10.957895 kubelet[2174]: E0128 01:00:10.957759 2174 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:00:10.957930 kubelet[2174]: E0128 01:00:10.957888 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="400ms" Jan 28 01:00:10.958005 kubelet[2174]: E0128 01:00:10.957930 2174 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:00:11.088125 kubelet[2174]: I0128 01:00:11.087824 2174 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:11.089858 kubelet[2174]: E0128 01:00:11.088828 2174 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jan 28 01:00:11.137926 systemd[1]: Created slice kubepods-burstable-pod4f232a0ddddc3a54d662046f5e309cdb.slice - libcontainer container kubepods-burstable-pod4f232a0ddddc3a54d662046f5e309cdb.slice. Jan 28 01:00:11.149130 kubelet[2174]: I0128 01:00:11.149020 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:11.149130 kubelet[2174]: I0128 01:00:11.149095 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:11.149310 kubelet[2174]: I0128 01:00:11.149157 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:11.149310 kubelet[2174]: I0128 01:00:11.149186 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:11.149310 kubelet[2174]: I0128 01:00:11.149216 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:11.149310 kubelet[2174]: I0128 01:00:11.149242 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:11.149603 kubelet[2174]: I0128 01:00:11.149313 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:11.149603 kubelet[2174]: I0128 01:00:11.149428 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:11.149603 kubelet[2174]: I0128 01:00:11.149456 2174 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:11.151451 kubelet[2174]: E0128 01:00:11.151269 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:11.155005 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 28 01:00:11.157894 kubelet[2174]: E0128 01:00:11.157843 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:11.160329 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. 
Jan 28 01:00:11.163557 kubelet[2174]: E0128 01:00:11.163338 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:11.293899 kubelet[2174]: I0128 01:00:11.293824 2174 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:11.294614 kubelet[2174]: E0128 01:00:11.294558 2174 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jan 28 01:00:11.359758 kubelet[2174]: E0128 01:00:11.359659 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="800ms" Jan 28 01:00:11.453843 kubelet[2174]: E0128 01:00:11.453608 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:11.455991 containerd[1461]: time="2026-01-28T01:00:11.455900913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f232a0ddddc3a54d662046f5e309cdb,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:11.458992 kubelet[2174]: E0128 01:00:11.458921 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:11.459668 containerd[1461]: time="2026-01-28T01:00:11.459625918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:11.464452 kubelet[2174]: E0128 01:00:11.464296 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:11.465234 containerd[1461]: time="2026-01-28T01:00:11.465177611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:11.674228 kubelet[2174]: W0128 01:00:11.674087 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:11.674228 kubelet[2174]: E0128 01:00:11.674247 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:11.697560 kubelet[2174]: I0128 01:00:11.697429 2174 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:11.698208 kubelet[2174]: E0128 01:00:11.698143 2174 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jan 28 01:00:11.943939 kubelet[2174]: W0128 01:00:11.943469 2174 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:11.944957 kubelet[2174]: E0128 01:00:11.943994 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:12.062969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615019531.mount: Deactivated successfully. Jan 28 01:00:12.063465 kubelet[2174]: W0128 01:00:12.063240 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:12.063568 kubelet[2174]: E0128 01:00:12.063498 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:12.076464 containerd[1461]: time="2026-01-28T01:00:12.076192317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:00:12.079648 containerd[1461]: time="2026-01-28T01:00:12.079496578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 28 01:00:12.081227 containerd[1461]: time="2026-01-28T01:00:12.081155347Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:00:12.086135 containerd[1461]: time="2026-01-28T01:00:12.086022183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:00:12.087213 containerd[1461]: time="2026-01-28T01:00:12.087080615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:00:12.088382 containerd[1461]: time="2026-01-28T01:00:12.088303064Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:00:12.089782 containerd[1461]: time="2026-01-28T01:00:12.089701358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 01:00:12.093168 containerd[1461]: time="2026-01-28T01:00:12.093092640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:00:12.109083 containerd[1461]: time="2026-01-28T01:00:12.108825659Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.029436ms" Jan 28 01:00:12.112839 containerd[1461]: time="2026-01-28T01:00:12.112707228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.432035ms" Jan 28 01:00:12.118880 containerd[1461]: time="2026-01-28T01:00:12.118484558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 662.418552ms" Jan 28 01:00:12.119731 kubelet[2174]: W0128 01:00:12.119573 2174 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.45:6443: connect: connection refused Jan 28 01:00:12.119731 kubelet[2174]: E0128 01:00:12.119729 2174 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:12.165889 kubelet[2174]: E0128 01:00:12.164898 2174 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.45:6443: connect: connection refused" interval="1.6s" Jan 28 01:00:12.509437 kubelet[2174]: I0128 01:00:12.509112 2174 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:12.510140 kubelet[2174]: E0128 01:00:12.510067 2174 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.45:6443/api/v1/nodes\": dial tcp 10.0.0.45:6443: connect: connection refused" node="localhost" Jan 28 01:00:12.674964 containerd[1461]: time="2026-01-28T01:00:12.670109307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:12.674964 containerd[1461]: time="2026-01-28T01:00:12.670585001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:12.674964 containerd[1461]: time="2026-01-28T01:00:12.670606766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.674964 containerd[1461]: time="2026-01-28T01:00:12.670963625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.677123 containerd[1461]: time="2026-01-28T01:00:12.664789737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:12.677123 containerd[1461]: time="2026-01-28T01:00:12.670226027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:12.677123 containerd[1461]: time="2026-01-28T01:00:12.670238402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.677123 containerd[1461]: time="2026-01-28T01:00:12.671438986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.748510 containerd[1461]: time="2026-01-28T01:00:12.718493888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:12.748510 containerd[1461]: time="2026-01-28T01:00:12.741612847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:12.748510 containerd[1461]: time="2026-01-28T01:00:12.741631726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.748510 containerd[1461]: time="2026-01-28T01:00:12.742477657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:12.792626 systemd[1]: Started cri-containerd-3ece0e8a3736e01b71ae44a164b52dcd0ef58881784bbe67125d6fc0ef44fd65.scope - libcontainer container 3ece0e8a3736e01b71ae44a164b52dcd0ef58881784bbe67125d6fc0ef44fd65. Jan 28 01:00:12.813514 systemd[1]: Started cri-containerd-ee1f9d6a754adafe51e87cd91271c8444ff2999d54f53aede68716faf0743fff.scope - libcontainer container ee1f9d6a754adafe51e87cd91271c8444ff2999d54f53aede68716faf0743fff. Jan 28 01:00:12.822445 kubelet[2174]: E0128 01:00:12.819995 2174 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.45:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:00:12.827932 systemd[1]: Started cri-containerd-75c1976c74b54808be605592bec014b392bb21bb33bf90ac3dcd6525388ee6d7.scope - libcontainer container 75c1976c74b54808be605592bec014b392bb21bb33bf90ac3dcd6525388ee6d7. 
Jan 28 01:00:13.104602 containerd[1461]: time="2026-01-28T01:00:13.104510551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4f232a0ddddc3a54d662046f5e309cdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ece0e8a3736e01b71ae44a164b52dcd0ef58881784bbe67125d6fc0ef44fd65\"" Jan 28 01:00:13.130976 containerd[1461]: time="2026-01-28T01:00:13.130662787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"75c1976c74b54808be605592bec014b392bb21bb33bf90ac3dcd6525388ee6d7\"" Jan 28 01:00:13.134057 kubelet[2174]: E0128 01:00:13.133933 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:13.134057 kubelet[2174]: E0128 01:00:13.134032 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:13.135310 containerd[1461]: time="2026-01-28T01:00:13.135191747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee1f9d6a754adafe51e87cd91271c8444ff2999d54f53aede68716faf0743fff\"" Jan 28 01:00:13.137213 kubelet[2174]: E0128 01:00:13.137157 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:13.137717 containerd[1461]: time="2026-01-28T01:00:13.137662006Z" level=info msg="CreateContainer within sandbox \"3ece0e8a3736e01b71ae44a164b52dcd0ef58881784bbe67125d6fc0ef44fd65\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:00:13.138911 containerd[1461]: time="2026-01-28T01:00:13.138854432Z" level=info msg="CreateContainer within sandbox \"75c1976c74b54808be605592bec014b392bb21bb33bf90ac3dcd6525388ee6d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:00:13.140564 containerd[1461]: time="2026-01-28T01:00:13.140503650Z" level=info msg="CreateContainer within sandbox \"ee1f9d6a754adafe51e87cd91271c8444ff2999d54f53aede68716faf0743fff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:00:13.155137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773649353.mount: Deactivated successfully. 
Jan 28 01:00:13.164947 containerd[1461]: time="2026-01-28T01:00:13.164864408Z" level=info msg="CreateContainer within sandbox \"ee1f9d6a754adafe51e87cd91271c8444ff2999d54f53aede68716faf0743fff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ba54039aa07b8a5a0f0aceb6c68b754993bf7d74a9f2010c514c6f39cc88da63\"" Jan 28 01:00:13.166798 containerd[1461]: time="2026-01-28T01:00:13.166714259Z" level=info msg="StartContainer for \"ba54039aa07b8a5a0f0aceb6c68b754993bf7d74a9f2010c514c6f39cc88da63\"" Jan 28 01:00:13.176870 containerd[1461]: time="2026-01-28T01:00:13.176682968Z" level=info msg="CreateContainer within sandbox \"3ece0e8a3736e01b71ae44a164b52dcd0ef58881784bbe67125d6fc0ef44fd65\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"865ed027988b17971f043acbfed5af9148b4558335c288f3b4df3ceed6352dd5\"" Jan 28 01:00:13.177494 containerd[1461]: time="2026-01-28T01:00:13.177416937Z" level=info msg="StartContainer for \"865ed027988b17971f043acbfed5af9148b4558335c288f3b4df3ceed6352dd5\"" Jan 28 01:00:13.182517 containerd[1461]: time="2026-01-28T01:00:13.182453734Z" level=info msg="CreateContainer within sandbox \"75c1976c74b54808be605592bec014b392bb21bb33bf90ac3dcd6525388ee6d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2daa22dc59655d46f4a985bcb4129355bf4efae455ce2e0d2919bde0b51d388d\"" Jan 28 01:00:13.184393 containerd[1461]: time="2026-01-28T01:00:13.183186911Z" level=info msg="StartContainer for \"2daa22dc59655d46f4a985bcb4129355bf4efae455ce2e0d2919bde0b51d388d\"" Jan 28 01:00:13.220537 systemd[1]: Started cri-containerd-865ed027988b17971f043acbfed5af9148b4558335c288f3b4df3ceed6352dd5.scope - libcontainer container 865ed027988b17971f043acbfed5af9148b4558335c288f3b4df3ceed6352dd5. Jan 28 01:00:13.232597 systemd[1]: Started cri-containerd-2daa22dc59655d46f4a985bcb4129355bf4efae455ce2e0d2919bde0b51d388d.scope - libcontainer container 2daa22dc59655d46f4a985bcb4129355bf4efae455ce2e0d2919bde0b51d388d. Jan 28 01:00:13.244583 systemd[1]: Started cri-containerd-ba54039aa07b8a5a0f0aceb6c68b754993bf7d74a9f2010c514c6f39cc88da63.scope - libcontainer container ba54039aa07b8a5a0f0aceb6c68b754993bf7d74a9f2010c514c6f39cc88da63. 
Jan 28 01:00:13.316226 kubelet[2174]: E0128 01:00:13.315943 2174 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.45:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.45:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ebf45f1f54e80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:00:10.736217728 +0000 UTC m=+0.750372420,LastTimestamp:2026-01-28 01:00:10.736217728 +0000 UTC m=+0.750372420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:00:13.360690 containerd[1461]: time="2026-01-28T01:00:13.360556771Z" level=info msg="StartContainer for \"865ed027988b17971f043acbfed5af9148b4558335c288f3b4df3ceed6352dd5\" returns successfully" Jan 28 01:00:13.384530 containerd[1461]: time="2026-01-28T01:00:13.384492573Z" level=info msg="StartContainer for \"2daa22dc59655d46f4a985bcb4129355bf4efae455ce2e0d2919bde0b51d388d\" returns successfully" Jan 28 01:00:13.385158 containerd[1461]: time="2026-01-28T01:00:13.385132701Z" level=info msg="StartContainer for \"ba54039aa07b8a5a0f0aceb6c68b754993bf7d74a9f2010c514c6f39cc88da63\" returns successfully" Jan 28 01:00:13.829473 kubelet[2174]: E0128 01:00:13.829243 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:13.829925 kubelet[2174]: E0128 01:00:13.829712 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:13.830994 kubelet[2174]: E0128 01:00:13.830464 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:13.830994 kubelet[2174]: E0128 01:00:13.830640 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:13.842477 kubelet[2174]: E0128 01:00:13.842332 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:13.842721 kubelet[2174]: E0128 01:00:13.842597 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:14.141922 kubelet[2174]: I0128 01:00:14.137257 2174 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:14.853197 kubelet[2174]: E0128 01:00:14.851729 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:14.853197 kubelet[2174]: E0128 01:00:14.852168 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:14.853197 kubelet[2174]: E0128 01:00:14.852209 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:14.853197 kubelet[2174]: E0128 01:00:14.852643 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:15.865256 kubelet[2174]: E0128 01:00:15.865144 2174 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:00:15.866641 kubelet[2174]: E0128 01:00:15.866519 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:16.317261 kubelet[2174]: E0128 01:00:16.317215 2174 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:00:16.413125 kubelet[2174]: I0128 01:00:16.412908 2174 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:00:16.413125 kubelet[2174]: E0128 01:00:16.412953 2174 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:00:16.448467 kubelet[2174]: I0128 01:00:16.448400 2174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:16.458786 kubelet[2174]: E0128 01:00:16.458581 2174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:16.458786 kubelet[2174]: I0128 01:00:16.458607 2174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:16.463595 kubelet[2174]: E0128 01:00:16.463465 2174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:16.463595 kubelet[2174]: I0128 01:00:16.463503 2174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:16.595159 kubelet[2174]: E0128 01:00:16.594583 2174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:16.804309 kubelet[2174]: I0128 01:00:16.804169 2174 apiserver.go:52] "Watching apiserver" Jan 28 01:00:16.847966 kubelet[2174]: I0128 01:00:16.847555 2174 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:00:17.265161 kubelet[2174]: I0128 01:00:17.264934 2174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:17.269413 kubelet[2174]: E0128 01:00:17.269176 2174 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:17.269496 kubelet[2174]: E0128 01:00:17.269444 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:18.447723 kubelet[2174]: I0128 01:00:18.447446 2174 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:18.458012 kubelet[2174]: E0128 01:00:18.457957 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:18.878275 kubelet[2174]: E0128 01:00:18.878044 2174 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:18.909808 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Jan 28 01:00:18.909889 systemd[1]: Reloading... Jan 28 01:00:19.084504 zram_generator::config[2493]: No configuration found. Jan 28 01:00:19.320772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 01:00:19.417995 systemd[1]: Reloading finished in 507 ms. Jan 28 01:00:19.498805 kubelet[2174]: I0128 01:00:19.498677 2174 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:00:19.498764 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:00:19.516677 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:00:19.517033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:19.517151 systemd[1]: kubelet.service: Consumed 2.691s CPU time, 137.8M memory peak, 0B memory swap peak. Jan 28 01:00:19.525749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:00:19.716679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:00:19.723849 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:00:19.806678 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:00:19.806678 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:00:19.806678 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 01:00:19.807153 kubelet[2539]: I0128 01:00:19.806719 2539 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:00:19.816787 kubelet[2539]: I0128 01:00:19.816728 2539 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:00:19.816787 kubelet[2539]: I0128 01:00:19.816776 2539 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:00:19.817193 kubelet[2539]: I0128 01:00:19.817138 2539 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:00:19.818482 kubelet[2539]: I0128 01:00:19.818439 2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:00:19.822730 kubelet[2539]: I0128 01:00:19.822555 2539 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:00:19.828339 kubelet[2539]: E0128 01:00:19.828255 2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 01:00:19.828339 kubelet[2539]: I0128 01:00:19.828315 2539 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 01:00:19.840883 kubelet[2539]: I0128 01:00:19.840796 2539 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 28 01:00:19.841242 kubelet[2539]: I0128 01:00:19.841173 2539 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:00:19.841479 kubelet[2539]: I0128 01:00:19.841217 2539 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:00:19.841479 kubelet[2539]: I0128 01:00:19.841446 2539 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 28 01:00:19.841479 kubelet[2539]: I0128 01:00:19.841457 2539 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:00:19.841774 kubelet[2539]: I0128 01:00:19.841509 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:00:19.841774 kubelet[2539]: I0128 01:00:19.841710 2539 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:00:19.841774 kubelet[2539]: I0128 01:00:19.841735 2539 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:00:19.841774 kubelet[2539]: I0128 01:00:19.841754 2539 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:00:19.841774 kubelet[2539]: I0128 01:00:19.841764 2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:00:19.843309 kubelet[2539]: I0128 01:00:19.843255 2539 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 01:00:19.844457 kubelet[2539]: I0128 01:00:19.844324 2539 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:00:19.845648 kubelet[2539]: I0128 01:00:19.845586 2539 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:00:19.845756 kubelet[2539]: I0128 01:00:19.845704 2539 server.go:1287] "Started kubelet" Jan 28 01:00:19.848692 kubelet[2539]: I0128 01:00:19.847688 2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:00:19.848861 kubelet[2539]: I0128 01:00:19.848798 2539 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:00:19.848921 kubelet[2539]: I0128 01:00:19.848903 2539 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:00:19.849170 kubelet[2539]: I0128 01:00:19.849116 2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:00:19.850005 kubelet[2539]: I0128 01:00:19.849889 2539 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:00:19.865229 kubelet[2539]: I0128 01:00:19.863652 2539 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:00:19.865229 kubelet[2539]: I0128 01:00:19.863715 2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:00:19.984424 kubelet[2539]: I0128 01:00:19.984097 2539 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:00:19.984886 kubelet[2539]: I0128 01:00:19.984711 2539 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:00:20.002470 kubelet[2539]: I0128 01:00:20.000547 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:00:20.002779 kubelet[2539]: E0128 01:00:20.002718 2539 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:00:20.004842 kubelet[2539]: I0128 01:00:20.004761 2539 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:00:20.004842 kubelet[2539]: I0128 01:00:20.004802 2539 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:00:20.004947 kubelet[2539]: I0128 01:00:20.004896 2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:00:20.007639 kubelet[2539]: I0128 01:00:20.006499 2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:00:20.007639 kubelet[2539]: I0128 01:00:20.006541 2539 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:00:20.007639 kubelet[2539]: I0128 01:00:20.006563 2539 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:00:20.007639 kubelet[2539]: I0128 01:00:20.006571 2539 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:00:20.007639 kubelet[2539]: E0128 01:00:20.006626 2539 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:00:20.069576 kubelet[2539]: I0128 01:00:20.069513 2539 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:00:20.069576 kubelet[2539]: I0128 01:00:20.069533 2539 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:00:20.069576 kubelet[2539]: I0128 01:00:20.069555 2539 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:00:20.069809 kubelet[2539]: I0128 01:00:20.069773 2539 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:00:20.069809 kubelet[2539]: I0128 01:00:20.069785 2539 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:00:20.069809 kubelet[2539]: I0128 01:00:20.069805 2539 policy_none.go:49] "None policy: Start" Jan 28 01:00:20.069941 kubelet[2539]: I0128 01:00:20.069816 2539 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:00:20.069941 kubelet[2539]: I0128 01:00:20.069827 2539 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:00:20.070132 kubelet[2539]: I0128 01:00:20.069956 2539 state_mem.go:75] "Updated machine memory state" Jan 28 01:00:20.076815 kubelet[2539]: I0128 01:00:20.076759 2539 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:00:20.077025 kubelet[2539]: I0128 01:00:20.076920 2539 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:00:20.077086 kubelet[2539]: I0128 01:00:20.076966 2539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:00:20.077795 kubelet[2539]: I0128 01:00:20.077753 2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:00:20.080123 kubelet[2539]: E0128 01:00:20.079749 2539 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:00:20.109435 kubelet[2539]: I0128 01:00:20.109290 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:20.110780 kubelet[2539]: I0128 01:00:20.109982 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:20.110780 kubelet[2539]: I0128 01:00:20.110432 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.123242 kubelet[2539]: E0128 01:00:20.123159 2539 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:20.185494 kubelet[2539]: I0128 01:00:20.185324 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:00:20.185494 kubelet[2539]: I0128 01:00:20.185470 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:20.185494 kubelet[2539]: I0128 01:00:20.185495 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.185674 kubelet[2539]: I0128 01:00:20.185511 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.185674 kubelet[2539]: I0128 01:00:20.185526 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:20.185674 kubelet[2539]: I0128 01:00:20.185539 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f232a0ddddc3a54d662046f5e309cdb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4f232a0ddddc3a54d662046f5e309cdb\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:20.185674 kubelet[2539]: I0128 01:00:20.185553 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.185674 
kubelet[2539]: I0128 01:00:20.185567 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.185857 kubelet[2539]: I0128 01:00:20.185582 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:00:20.189890 kubelet[2539]: I0128 01:00:20.189854 2539 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:00:20.199133 kubelet[2539]: I0128 01:00:20.199077 2539 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:00:20.199227 kubelet[2539]: I0128 01:00:20.199176 2539 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:00:20.424705 kubelet[2539]: E0128 01:00:20.424571 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:20.426767 kubelet[2539]: E0128 01:00:20.425336 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:20.426767 kubelet[2539]: E0128 01:00:20.425726 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:20.846477 kubelet[2539]: I0128 01:00:20.844248 2539 apiserver.go:52] "Watching apiserver" Jan 28 01:00:20.885647 kubelet[2539]: I0128 01:00:20.885472 2539 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:00:21.056620 kubelet[2539]: E0128 01:00:21.056439 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:21.057471 kubelet[2539]: I0128 01:00:21.056715 2539 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:21.060489 kubelet[2539]: E0128 01:00:21.057341 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:21.066906 kubelet[2539]: E0128 01:00:21.065678 2539 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 28 01:00:21.066906 kubelet[2539]: E0128 01:00:21.065854 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:21.096221 kubelet[2539]: I0128 01:00:21.095995 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.09592789 podStartE2EDuration="3.09592789s" 
podCreationTimestamp="2026-01-28 01:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:21.086123406 +0000 UTC m=+1.347342459" watchObservedRunningTime="2026-01-28 01:00:21.09592789 +0000 UTC m=+1.357146903" Jan 28 01:00:21.096221 kubelet[2539]: I0128 01:00:21.096129 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.096120085 podStartE2EDuration="1.096120085s" podCreationTimestamp="2026-01-28 01:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:21.095860252 +0000 UTC m=+1.357079295" watchObservedRunningTime="2026-01-28 01:00:21.096120085 +0000 UTC m=+1.357339118" Jan 28 01:00:22.059935 kubelet[2539]: E0128 01:00:22.059586 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:22.059935 kubelet[2539]: E0128 01:00:22.059873 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:22.061481 kubelet[2539]: E0128 01:00:22.061262 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:24.845700 kubelet[2539]: E0128 01:00:24.845434 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:24.967116 kubelet[2539]: I0128 01:00:24.965015 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.964893439 podStartE2EDuration="4.964893439s" podCreationTimestamp="2026-01-28 01:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:21.110664879 +0000 UTC m=+1.371883892" watchObservedRunningTime="2026-01-28 01:00:24.964893439 +0000 UTC m=+5.226112472" Jan 28 01:00:25.081057 kubelet[2539]: E0128 01:00:25.079549 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:25.186832 kubelet[2539]: I0128 01:00:25.186566 2539 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:00:25.187737 containerd[1461]: time="2026-01-28T01:00:25.187184960Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:00:25.188240 kubelet[2539]: I0128 01:00:25.187505 2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:00:25.589710 systemd[1]: Created slice kubepods-besteffort-pod2183b590_04c5_4619_b682_3b237e7a93de.slice - libcontainer container kubepods-besteffort-pod2183b590_04c5_4619_b682_3b237e7a93de.slice. 
Jan 28 01:00:25.612314 kubelet[2539]: I0128 01:00:25.612234 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2183b590-04c5-4619-b682-3b237e7a93de-kube-proxy\") pod \"kube-proxy-t9hk8\" (UID: \"2183b590-04c5-4619-b682-3b237e7a93de\") " pod="kube-system/kube-proxy-t9hk8" Jan 28 01:00:25.612314 kubelet[2539]: I0128 01:00:25.612288 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2183b590-04c5-4619-b682-3b237e7a93de-lib-modules\") pod \"kube-proxy-t9hk8\" (UID: \"2183b590-04c5-4619-b682-3b237e7a93de\") " pod="kube-system/kube-proxy-t9hk8" Jan 28 01:00:25.612314 kubelet[2539]: I0128 01:00:25.612318 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2183b590-04c5-4619-b682-3b237e7a93de-xtables-lock\") pod \"kube-proxy-t9hk8\" (UID: \"2183b590-04c5-4619-b682-3b237e7a93de\") " pod="kube-system/kube-proxy-t9hk8" Jan 28 01:00:25.612539 kubelet[2539]: I0128 01:00:25.612462 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvwt\" (UniqueName: \"kubernetes.io/projected/2183b590-04c5-4619-b682-3b237e7a93de-kube-api-access-msvwt\") pod \"kube-proxy-t9hk8\" (UID: \"2183b590-04c5-4619-b682-3b237e7a93de\") " pod="kube-system/kube-proxy-t9hk8" Jan 28 01:00:25.722443 kubelet[2539]: E0128 01:00:25.722320 2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 28 01:00:25.723875 kubelet[2539]: E0128 01:00:25.722491 2539 projected.go:194] Error preparing data for projected volume kube-api-access-msvwt for pod kube-system/kube-proxy-t9hk8: configmap "kube-root-ca.crt" not found Jan 28 01:00:25.723875 kubelet[2539]: E0128 01:00:25.722634 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2183b590-04c5-4619-b682-3b237e7a93de-kube-api-access-msvwt podName:2183b590-04c5-4619-b682-3b237e7a93de nodeName:}" failed. No retries permitted until 2026-01-28 01:00:26.222582522 +0000 UTC m=+6.483801535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-msvwt" (UniqueName: "kubernetes.io/projected/2183b590-04c5-4619-b682-3b237e7a93de-kube-api-access-msvwt") pod "kube-proxy-t9hk8" (UID: "2183b590-04c5-4619-b682-3b237e7a93de") : configmap "kube-root-ca.crt" not found Jan 28 01:00:25.997763 systemd[1]: Created slice kubepods-besteffort-pod1e323035_4c7f_48ba_878f_ac42a2a3857c.slice - libcontainer container kubepods-besteffort-pod1e323035_4c7f_48ba_878f_ac42a2a3857c.slice. 
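The projected.go and nestedpendingoperations.go errors above are a bootstrap-ordering effect: the kube-api-access-msvwt projected volume for kube-proxy-t9hk8 bundles a service-account token with the kube-root-ca.crt ConfigMap, and that ConfigMap has not been published into kube-system yet, so MountVolume.SetUp fails and is requeued ("No retries permitted until ... durationBeforeRetry 500ms"); the pod's sandbox and container do come up shortly afterwards. A minimal sketch of that retry-with-delay pattern, where mountVolume, the doubling factor, and the cap are illustrative assumptions rather than the kubelet's actual policy:

// backoff_retry.go: sketch of retrying a failing setup step with an
// increasing delay, in the spirit of the "durationBeforeRetry 500ms" message.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume is a hypothetical stand-in for MountVolume.SetUp; it fails until
// its dependency (here: the kube-root-ca.crt ConfigMap) exists.
func mountVolume(attempt int) error {
	if attempt < 3 {
		return errors.New(`configmap "kube-root-ca.crt" not found`)
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial delay matching the logged value
	const maxDelay = 2 * time.Minute // cap chosen for illustration only

	for attempt := 0; ; attempt++ {
		err := mountVolume(attempt)
		if err == nil {
			fmt.Println("volume set up after", attempt+1, "attempts")
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt+1, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}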
Jan 28 01:00:26.025289 kubelet[2539]: I0128 01:00:26.025190 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnqw\" (UniqueName: \"kubernetes.io/projected/1e323035-4c7f-48ba-878f-ac42a2a3857c-kube-api-access-zmnqw\") pod \"tigera-operator-7dcd859c48-7mkgq\" (UID: \"1e323035-4c7f-48ba-878f-ac42a2a3857c\") " pod="tigera-operator/tigera-operator-7dcd859c48-7mkgq" Jan 28 01:00:26.025289 kubelet[2539]: I0128 01:00:26.025254 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e323035-4c7f-48ba-878f-ac42a2a3857c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7mkgq\" (UID: \"1e323035-4c7f-48ba-878f-ac42a2a3857c\") " pod="tigera-operator/tigera-operator-7dcd859c48-7mkgq" Jan 28 01:00:26.081930 kubelet[2539]: E0128 01:00:26.081842 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:26.303656 containerd[1461]: time="2026-01-28T01:00:26.303610984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7mkgq,Uid:1e323035-4c7f-48ba-878f-ac42a2a3857c,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:00:26.365786 containerd[1461]: time="2026-01-28T01:00:26.365443832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:26.365786 containerd[1461]: time="2026-01-28T01:00:26.365682465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:26.365786 containerd[1461]: time="2026-01-28T01:00:26.365699199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:26.366259 containerd[1461]: time="2026-01-28T01:00:26.365992460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:26.398668 systemd[1]: Started cri-containerd-d0c1373254cd0a5fe0433be65c3db70b65d2296c3b00979268c074598e0b7c0f.scope - libcontainer container d0c1373254cd0a5fe0433be65c3db70b65d2296c3b00979268c074598e0b7c0f. Jan 28 01:00:26.449329 containerd[1461]: time="2026-01-28T01:00:26.449249759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7mkgq,Uid:1e323035-4c7f-48ba-878f-ac42a2a3857c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d0c1373254cd0a5fe0433be65c3db70b65d2296c3b00979268c074598e0b7c0f\"" Jan 28 01:00:26.454539 containerd[1461]: time="2026-01-28T01:00:26.454267171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:00:26.502600 kubelet[2539]: E0128 01:00:26.502497 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:26.503097 containerd[1461]: time="2026-01-28T01:00:26.503057410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9hk8,Uid:2183b590-04c5-4619-b682-3b237e7a93de,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:26.549928 containerd[1461]: time="2026-01-28T01:00:26.549668699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:26.549928 containerd[1461]: time="2026-01-28T01:00:26.549773256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:26.549928 containerd[1461]: time="2026-01-28T01:00:26.549784659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:26.550221 containerd[1461]: time="2026-01-28T01:00:26.549903354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:26.579586 systemd[1]: Started cri-containerd-b3aead4c62c403aa137362ed82829c70a9327d9aa44b9da0d2b0112223bf11f3.scope - libcontainer container b3aead4c62c403aa137362ed82829c70a9327d9aa44b9da0d2b0112223bf11f3. Jan 28 01:00:26.617946 containerd[1461]: time="2026-01-28T01:00:26.617823491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t9hk8,Uid:2183b590-04c5-4619-b682-3b237e7a93de,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3aead4c62c403aa137362ed82829c70a9327d9aa44b9da0d2b0112223bf11f3\"" Jan 28 01:00:26.619166 kubelet[2539]: E0128 01:00:26.619043 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:26.623101 containerd[1461]: time="2026-01-28T01:00:26.622948571Z" level=info msg="CreateContainer within sandbox \"b3aead4c62c403aa137362ed82829c70a9327d9aa44b9da0d2b0112223bf11f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:00:26.656194 containerd[1461]: time="2026-01-28T01:00:26.656096742Z" level=info msg="CreateContainer within sandbox \"b3aead4c62c403aa137362ed82829c70a9327d9aa44b9da0d2b0112223bf11f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"05b39b2c6d0bc850fe3f32dd6ec6810075a024c27009b0080063698674f2902c\"" Jan 28 01:00:26.656971 containerd[1461]: time="2026-01-28T01:00:26.656890648Z" level=info msg="StartContainer for \"05b39b2c6d0bc850fe3f32dd6ec6810075a024c27009b0080063698674f2902c\"" Jan 28 01:00:26.695566 systemd[1]: Started cri-containerd-05b39b2c6d0bc850fe3f32dd6ec6810075a024c27009b0080063698674f2902c.scope - libcontainer container 05b39b2c6d0bc850fe3f32dd6ec6810075a024c27009b0080063698674f2902c. Jan 28 01:00:26.745631 containerd[1461]: time="2026-01-28T01:00:26.745493724Z" level=info msg="StartContainer for \"05b39b2c6d0bc850fe3f32dd6ec6810075a024c27009b0080063698674f2902c\" returns successfully" Jan 28 01:00:27.085802 kubelet[2539]: E0128 01:00:27.085729 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:27.911798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263220908.mount: Deactivated successfully. 
Jan 28 01:00:29.856417 kubelet[2539]: E0128 01:00:29.856090 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:29.930760 kubelet[2539]: I0128 01:00:29.930274 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t9hk8" podStartSLOduration=4.9300366879999995 podStartE2EDuration="4.930036688s" podCreationTimestamp="2026-01-28 01:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:27.098622779 +0000 UTC m=+7.359841791" watchObservedRunningTime="2026-01-28 01:00:29.930036688 +0000 UTC m=+10.191255741" Jan 28 01:00:30.251612 kubelet[2539]: E0128 01:00:30.251282 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:32.170459 kubelet[2539]: E0128 01:00:32.169280 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:33.211043 containerd[1461]: time="2026-01-28T01:00:33.210791115Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:33.212146 containerd[1461]: time="2026-01-28T01:00:33.211593544Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 01:00:33.222882 containerd[1461]: time="2026-01-28T01:00:33.222672289Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:33.277129 containerd[1461]: time="2026-01-28T01:00:33.276954294Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:33.280223 containerd[1461]: time="2026-01-28T01:00:33.279975516Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 6.825678589s" Jan 28 01:00:33.280533 containerd[1461]: time="2026-01-28T01:00:33.280320544Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:00:33.289735 containerd[1461]: time="2026-01-28T01:00:33.289229056Z" level=info msg="CreateContainer within sandbox \"d0c1373254cd0a5fe0433be65c3db70b65d2296c3b00979268c074598e0b7c0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:00:33.358670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3717004298.mount: Deactivated successfully. 
Jan 28 01:00:33.381725 containerd[1461]: time="2026-01-28T01:00:33.381324566Z" level=info msg="CreateContainer within sandbox \"d0c1373254cd0a5fe0433be65c3db70b65d2296c3b00979268c074598e0b7c0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a46d58d0abc6294637b9ddc9215251d7e8cd067d45460b934c86c7a7da4e13b4\"" Jan 28 01:00:33.393126 containerd[1461]: time="2026-01-28T01:00:33.392981997Z" level=info msg="StartContainer for \"a46d58d0abc6294637b9ddc9215251d7e8cd067d45460b934c86c7a7da4e13b4\"" Jan 28 01:00:33.558830 systemd[1]: Started cri-containerd-a46d58d0abc6294637b9ddc9215251d7e8cd067d45460b934c86c7a7da4e13b4.scope - libcontainer container a46d58d0abc6294637b9ddc9215251d7e8cd067d45460b934c86c7a7da4e13b4. Jan 28 01:00:33.684103 containerd[1461]: time="2026-01-28T01:00:33.683958838Z" level=info msg="StartContainer for \"a46d58d0abc6294637b9ddc9215251d7e8cd067d45460b934c86c7a7da4e13b4\" returns successfully" Jan 28 01:00:35.317852 kubelet[2539]: I0128 01:00:35.317068 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7mkgq" podStartSLOduration=3.481908702 podStartE2EDuration="10.316850778s" podCreationTimestamp="2026-01-28 01:00:25 +0000 UTC" firstStartedPulling="2026-01-28 01:00:26.451704962 +0000 UTC m=+6.712923975" lastFinishedPulling="2026-01-28 01:00:33.286647038 +0000 UTC m=+13.547866051" observedRunningTime="2026-01-28 01:00:35.31204045 +0000 UTC m=+15.573259462" watchObservedRunningTime="2026-01-28 01:00:35.316850778 +0000 UTC m=+15.578069790" Jan 28 01:00:45.530442 sudo[1641]: pam_unix(sudo:session): session closed for user root Jan 28 01:00:45.536226 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 28 01:00:45.551620 systemd[1]: sshd@6-10.0.0.45:22-10.0.0.1:50400.service: Deactivated successfully. Jan 28 01:00:45.564997 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:00:45.567442 systemd[1]: session-7.scope: Consumed 11.681s CPU time, 159.5M memory peak, 0B memory swap peak. Jan 28 01:00:45.570925 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:00:45.574767 systemd-logind[1447]: Removed session 7. Jan 28 01:00:50.695587 systemd[1]: Created slice kubepods-besteffort-poda4174d08_a2fd_4642_855c_9a9b4b324256.slice - libcontainer container kubepods-besteffort-poda4174d08_a2fd_4642_855c_9a9b4b324256.slice. 
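The pod_startup_latency_tracker lines report two figures per pod: podStartE2EDuration, the time from podCreationTimestamp to the watch-observed running time, and podStartSLOduration, which additionally excludes the image pull window. The tigera-operator entry above is consistent with that reading: 01:00:35.316850778 minus 01:00:25 is 10.316850778s end to end, the pull window 01:00:26.451704962 to 01:00:33.286647038 is 6.834942076s, and the difference is exactly the logged 3.481908702s; for pods that pulled nothing (firstStartedPulling left at the zero time, as with kube-proxy-t9hk8 earlier) the two durations coincide. A short sketch reproducing that arithmetic from the logged timestamps; the interpretation is inferred from these numbers, not taken from the tracker's source:

// slo_duration.go: reproduces the podStartSLOduration arithmetic for
// tigera-operator-7dcd859c48-7mkgq from the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Fractional seconds in the input are accepted even though the layout
	// omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-28 01:00:25 +0000 UTC")
	running := mustParse("2026-01-28 01:00:35.316850778 +0000 UTC")
	pullStart := mustParse("2026-01-28 01:00:26.451704962 +0000 UTC")
	pullEnd := mustParse("2026-01-28 01:00:33.286647038 +0000 UTC")

	e2e := running.Sub(created)
	pull := pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e)      // 10.316850778s
	fmt.Println("image pull window:  ", pull)     // 6.834942076s
	fmt.Println("podStartSLOduration:", e2e-pull) // 3.481908702s
}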
Jan 28 01:00:50.831879 kubelet[2539]: I0128 01:00:50.831823 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4174d08-a2fd-4642-855c-9a9b4b324256-typha-certs\") pod \"calico-typha-696b75776-wxkp8\" (UID: \"a4174d08-a2fd-4642-855c-9a9b4b324256\") " pod="calico-system/calico-typha-696b75776-wxkp8" Jan 28 01:00:50.831879 kubelet[2539]: I0128 01:00:50.831876 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vkx\" (UniqueName: \"kubernetes.io/projected/a4174d08-a2fd-4642-855c-9a9b4b324256-kube-api-access-l2vkx\") pod \"calico-typha-696b75776-wxkp8\" (UID: \"a4174d08-a2fd-4642-855c-9a9b4b324256\") " pod="calico-system/calico-typha-696b75776-wxkp8" Jan 28 01:00:50.833220 kubelet[2539]: I0128 01:00:50.832239 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4174d08-a2fd-4642-855c-9a9b4b324256-tigera-ca-bundle\") pod \"calico-typha-696b75776-wxkp8\" (UID: \"a4174d08-a2fd-4642-855c-9a9b4b324256\") " pod="calico-system/calico-typha-696b75776-wxkp8" Jan 28 01:00:50.886933 systemd[1]: Created slice kubepods-besteffort-podcfa669e8_1b65_4ab3_b462_cccde3d0d9c7.slice - libcontainer container kubepods-besteffort-podcfa669e8_1b65_4ab3_b462_cccde3d0d9c7.slice. Jan 28 01:00:51.008998 kubelet[2539]: E0128 01:00:51.008779 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:51.009881 containerd[1461]: time="2026-01-28T01:00:51.009832604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696b75776-wxkp8,Uid:a4174d08-a2fd-4642-855c-9a9b4b324256,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:51.046440 kubelet[2539]: I0128 01:00:51.046050 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-lib-modules\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046440 kubelet[2539]: I0128 01:00:51.046092 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-node-certs\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046440 kubelet[2539]: I0128 01:00:51.046109 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbqjv\" (UniqueName: \"kubernetes.io/projected/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-kube-api-access-jbqjv\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046440 kubelet[2539]: I0128 01:00:51.046124 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-var-lib-calico\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046440 kubelet[2539]: I0128 01:00:51.046140 2539 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-cni-net-dir\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046689 kubelet[2539]: I0128 01:00:51.046186 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-var-run-calico\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046689 kubelet[2539]: I0128 01:00:51.046202 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-cni-log-dir\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046689 kubelet[2539]: I0128 01:00:51.046214 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-policysync\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046689 kubelet[2539]: I0128 01:00:51.046255 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-cni-bin-dir\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046689 kubelet[2539]: I0128 01:00:51.046308 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-flexvol-driver-host\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046793 kubelet[2539]: I0128 01:00:51.046424 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-xtables-lock\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.046793 kubelet[2539]: I0128 01:00:51.046468 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfa669e8-1b65-4ab3-b462-cccde3d0d9c7-tigera-ca-bundle\") pod \"calico-node-smgl8\" (UID: \"cfa669e8-1b65-4ab3-b462-cccde3d0d9c7\") " pod="calico-system/calico-node-smgl8" Jan 28 01:00:51.075822 containerd[1461]: time="2026-01-28T01:00:51.075500891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:51.077037 containerd[1461]: time="2026-01-28T01:00:51.075894994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:51.077037 containerd[1461]: time="2026-01-28T01:00:51.075956413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:51.077037 containerd[1461]: time="2026-01-28T01:00:51.076679411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:51.100878 kubelet[2539]: E0128 01:00:51.100177 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:00:51.130608 systemd[1]: Started cri-containerd-ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733.scope - libcontainer container ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733. Jan 28 01:00:51.156312 kubelet[2539]: E0128 01:00:51.155188 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.156312 kubelet[2539]: W0128 01:00:51.155218 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.156312 kubelet[2539]: E0128 01:00:51.155309 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.157085 kubelet[2539]: E0128 01:00:51.157026 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.157085 kubelet[2539]: W0128 01:00:51.157064 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.157085 kubelet[2539]: E0128 01:00:51.157076 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.172749 kubelet[2539]: E0128 01:00:51.172671 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.172749 kubelet[2539]: W0128 01:00:51.172717 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.172749 kubelet[2539]: E0128 01:00:51.172734 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.190737 kubelet[2539]: E0128 01:00:51.190663 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:51.191226 containerd[1461]: time="2026-01-28T01:00:51.191139518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smgl8,Uid:cfa669e8-1b65-4ab3-b462-cccde3d0d9c7,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:51.214325 kubelet[2539]: E0128 01:00:51.214273 2539 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4174d08_a2fd_4642_855c_9a9b4b324256.slice/cri-containerd-ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733.scope\": RecentStats: unable to find data in memory cache]" Jan 28 01:00:51.243702 containerd[1461]: time="2026-01-28T01:00:51.243462733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-696b75776-wxkp8,Uid:a4174d08-a2fd-4642-855c-9a9b4b324256,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733\"" Jan 28 01:00:51.247117 containerd[1461]: time="2026-01-28T01:00:51.246812823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:51.247117 containerd[1461]: time="2026-01-28T01:00:51.246885033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:51.247117 containerd[1461]: time="2026-01-28T01:00:51.246898549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:51.247959 containerd[1461]: time="2026-01-28T01:00:51.247071914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:51.248678 kubelet[2539]: E0128 01:00:51.248610 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.248678 kubelet[2539]: W0128 01:00:51.248663 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.249428 kubelet[2539]: E0128 01:00:51.248689 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.249428 kubelet[2539]: I0128 01:00:51.248754 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11-socket-dir\") pod \"csi-node-driver-zzx59\" (UID: \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\") " pod="calico-system/csi-node-driver-zzx59" Jan 28 01:00:51.249937 kubelet[2539]: E0128 01:00:51.249896 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.250023 kubelet[2539]: W0128 01:00:51.249965 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.250056 kubelet[2539]: E0128 01:00:51.250022 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.250929 kubelet[2539]: E0128 01:00:51.250905 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.250929 kubelet[2539]: W0128 01:00:51.250918 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.251041 kubelet[2539]: E0128 01:00:51.250932 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.251515 kubelet[2539]: E0128 01:00:51.251470 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.251515 kubelet[2539]: W0128 01:00:51.251507 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.251515 kubelet[2539]: E0128 01:00:51.251520 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.251688 kubelet[2539]: I0128 01:00:51.251575 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11-registration-dir\") pod \"csi-node-driver-zzx59\" (UID: \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\") " pod="calico-system/csi-node-driver-zzx59" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.252010 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.254419 kubelet[2539]: W0128 01:00:51.252026 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.252042 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.252455 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.254419 kubelet[2539]: W0128 01:00:51.252464 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.252478 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.252965 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.254419 kubelet[2539]: W0128 01:00:51.253027 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.254419 kubelet[2539]: E0128 01:00:51.253078 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.254608 kubelet[2539]: I0128 01:00:51.253128 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11-varrun\") pod \"csi-node-driver-zzx59\" (UID: \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\") " pod="calico-system/csi-node-driver-zzx59" Jan 28 01:00:51.254669 kubelet[2539]: E0128 01:00:51.254643 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:51.255097 kubelet[2539]: E0128 01:00:51.255016 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.255097 kubelet[2539]: W0128 01:00:51.255049 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.255097 kubelet[2539]: E0128 01:00:51.255061 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.255097 kubelet[2539]: I0128 01:00:51.255082 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11-kubelet-dir\") pod \"csi-node-driver-zzx59\" (UID: \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\") " pod="calico-system/csi-node-driver-zzx59" Jan 28 01:00:51.255569 kubelet[2539]: E0128 01:00:51.255475 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.255569 kubelet[2539]: W0128 01:00:51.255510 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.255905 kubelet[2539]: E0128 01:00:51.255523 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.255944 containerd[1461]: time="2026-01-28T01:00:51.255885275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:00:51.256148 kubelet[2539]: I0128 01:00:51.256022 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zjlj\" (UniqueName: \"kubernetes.io/projected/e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11-kube-api-access-9zjlj\") pod \"csi-node-driver-zzx59\" (UID: \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\") " pod="calico-system/csi-node-driver-zzx59" Jan 28 01:00:51.256586 kubelet[2539]: E0128 01:00:51.256544 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.256586 kubelet[2539]: W0128 01:00:51.256573 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.256586 kubelet[2539]: E0128 01:00:51.256585 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.256959 kubelet[2539]: E0128 01:00:51.256899 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.256959 kubelet[2539]: W0128 01:00:51.256934 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.257270 kubelet[2539]: E0128 01:00:51.257101 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.257440 kubelet[2539]: E0128 01:00:51.257409 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.257440 kubelet[2539]: W0128 01:00:51.257437 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.257643 kubelet[2539]: E0128 01:00:51.257597 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.257849 kubelet[2539]: E0128 01:00:51.257807 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.257849 kubelet[2539]: W0128 01:00:51.257834 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.257905 kubelet[2539]: E0128 01:00:51.257877 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.258959 kubelet[2539]: E0128 01:00:51.258328 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.258959 kubelet[2539]: W0128 01:00:51.258342 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.258959 kubelet[2539]: E0128 01:00:51.258415 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.258959 kubelet[2539]: E0128 01:00:51.258943 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.259265 kubelet[2539]: W0128 01:00:51.259092 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.259265 kubelet[2539]: E0128 01:00:51.259114 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.290922 systemd[1]: Started cri-containerd-116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df.scope - libcontainer container 116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df. 
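The large blocks of driver-call.go and plugins.go warnings throughout this window come from the kubelet probing its FlexVolume plugin directory: for each plugin it runs the driver binary with the init argument and tries to unmarshal a JSON reply, but /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the output is empty and decoding fails with "unexpected end of JSON input". Calico ships that uds driver and installs it via the flexvol-driver-host host-path mount listed for calico-node-smgl8 above, so the noise is expected to stop once that pod is running. A minimal sketch of the probe-and-decode pattern, using a simplified, hypothetical driverStatus type rather than the kubelet's own:

// flexvolume_probe.go: sketch of "run driver binary, parse JSON reply", the
// pattern behind the driver-call.go errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for a FlexVolume driver reply.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(executable string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(executable, args...).CombinedOutput()
	if err != nil {
		// Log the failed call, as driver-call.go does above.
		fmt.Printf("driver call failed: executable: %s, args: %v, error: %v, output: %q\n",
			executable, args, err, string(out))
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", string(out), uerr)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println("probe error:", err)
	}
}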
Jan 28 01:00:51.336308 containerd[1461]: time="2026-01-28T01:00:51.335203928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smgl8,Uid:cfa669e8-1b65-4ab3-b462-cccde3d0d9c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\"" Jan 28 01:00:51.338595 kubelet[2539]: E0128 01:00:51.338471 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:51.357261 kubelet[2539]: E0128 01:00:51.357170 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.357261 kubelet[2539]: W0128 01:00:51.357215 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.357261 kubelet[2539]: E0128 01:00:51.357239 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.358048 kubelet[2539]: E0128 01:00:51.357932 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.358048 kubelet[2539]: W0128 01:00:51.357977 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.358048 kubelet[2539]: E0128 01:00:51.358030 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.358681 kubelet[2539]: E0128 01:00:51.358583 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.358681 kubelet[2539]: W0128 01:00:51.358621 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.358681 kubelet[2539]: E0128 01:00:51.358637 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.359121 kubelet[2539]: E0128 01:00:51.359081 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.359333 kubelet[2539]: W0128 01:00:51.359126 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.359333 kubelet[2539]: E0128 01:00:51.359263 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.359700 kubelet[2539]: E0128 01:00:51.359545 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.359700 kubelet[2539]: W0128 01:00:51.359554 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.359830 kubelet[2539]: E0128 01:00:51.359737 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.359978 kubelet[2539]: E0128 01:00:51.359932 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.359978 kubelet[2539]: W0128 01:00:51.359942 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.360157 kubelet[2539]: E0128 01:00:51.360128 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.360483 kubelet[2539]: E0128 01:00:51.360407 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.360652 kubelet[2539]: W0128 01:00:51.360528 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.360940 kubelet[2539]: E0128 01:00:51.360787 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.361338 kubelet[2539]: E0128 01:00:51.361271 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.361484 kubelet[2539]: W0128 01:00:51.361465 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.361742 kubelet[2539]: E0128 01:00:51.361630 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.362511 kubelet[2539]: E0128 01:00:51.362419 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.362511 kubelet[2539]: W0128 01:00:51.362432 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.362743 kubelet[2539]: E0128 01:00:51.362620 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.363307 kubelet[2539]: E0128 01:00:51.363129 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.363307 kubelet[2539]: W0128 01:00:51.363213 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.363525 kubelet[2539]: E0128 01:00:51.363435 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.364119 kubelet[2539]: E0128 01:00:51.363903 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.364119 kubelet[2539]: W0128 01:00:51.363915 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.364119 kubelet[2539]: E0128 01:00:51.364029 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.365190 kubelet[2539]: E0128 01:00:51.364885 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.365190 kubelet[2539]: W0128 01:00:51.364897 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.365190 kubelet[2539]: E0128 01:00:51.364977 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.365485 kubelet[2539]: E0128 01:00:51.365473 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.365732 kubelet[2539]: W0128 01:00:51.365614 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.366186 kubelet[2539]: E0128 01:00:51.365908 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.366626 kubelet[2539]: E0128 01:00:51.366609 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.366626 kubelet[2539]: W0128 01:00:51.366724 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.367278 kubelet[2539]: E0128 01:00:51.367086 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.369120 kubelet[2539]: E0128 01:00:51.368779 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.369120 kubelet[2539]: W0128 01:00:51.368792 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.369120 kubelet[2539]: E0128 01:00:51.368976 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.370694 kubelet[2539]: E0128 01:00:51.370595 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.371058 kubelet[2539]: W0128 01:00:51.370842 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.371225 kubelet[2539]: E0128 01:00:51.371210 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.371566 kubelet[2539]: E0128 01:00:51.371554 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.371700 kubelet[2539]: W0128 01:00:51.371688 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.372061 kubelet[2539]: E0128 01:00:51.371947 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.373535 kubelet[2539]: E0128 01:00:51.373404 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.375425 kubelet[2539]: W0128 01:00:51.375406 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.375626 kubelet[2539]: E0128 01:00:51.375612 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.375858 kubelet[2539]: E0128 01:00:51.375847 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.376290 kubelet[2539]: W0128 01:00:51.375912 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.376341 kubelet[2539]: E0128 01:00:51.376293 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.376341 kubelet[2539]: W0128 01:00:51.376304 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.377094 kubelet[2539]: E0128 01:00:51.376961 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.377686 kubelet[2539]: E0128 01:00:51.377425 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.377686 kubelet[2539]: W0128 01:00:51.377439 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.377686 kubelet[2539]: E0128 01:00:51.377573 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.377686 kubelet[2539]: E0128 01:00:51.377655 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.379032 kubelet[2539]: E0128 01:00:51.378559 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.379032 kubelet[2539]: W0128 01:00:51.378571 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.380655 kubelet[2539]: E0128 01:00:51.380599 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.380743 kubelet[2539]: E0128 01:00:51.380711 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.380743 kubelet[2539]: W0128 01:00:51.380720 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.382401 kubelet[2539]: E0128 01:00:51.380834 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:51.382401 kubelet[2539]: E0128 01:00:51.381218 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.382401 kubelet[2539]: W0128 01:00:51.381227 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.382401 kubelet[2539]: E0128 01:00:51.381279 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.384084 kubelet[2539]: E0128 01:00:51.384043 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.384131 kubelet[2539]: W0128 01:00:51.384120 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.384164 kubelet[2539]: E0128 01:00:51.384132 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.403425 kubelet[2539]: E0128 01:00:51.401524 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:51.403425 kubelet[2539]: W0128 01:00:51.401561 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:51.403425 kubelet[2539]: E0128 01:00:51.401598 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:51.964435 systemd[1]: run-containerd-runc-k8s.io-ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733-runc.2MKaOr.mount: Deactivated successfully. Jan 28 01:00:52.425747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622160346.mount: Deactivated successfully. 
Jan 28 01:00:53.007530 kubelet[2539]: E0128 01:00:53.007430 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:00:53.318698 containerd[1461]: time="2026-01-28T01:00:53.318640579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:53.319813 containerd[1461]: time="2026-01-28T01:00:53.319716322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 28 01:00:53.321785 containerd[1461]: time="2026-01-28T01:00:53.321717655Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:53.324478 containerd[1461]: time="2026-01-28T01:00:53.324342403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:53.325275 containerd[1461]: time="2026-01-28T01:00:53.325015655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.069087967s" Jan 28 01:00:53.325275 containerd[1461]: time="2026-01-28T01:00:53.325057446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:00:53.335706 containerd[1461]: time="2026-01-28T01:00:53.335666005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:00:53.353737 containerd[1461]: time="2026-01-28T01:00:53.353617782Z" level=info msg="CreateContainer within sandbox \"ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:00:53.415109 containerd[1461]: time="2026-01-28T01:00:53.415014914Z" level=info msg="CreateContainer within sandbox \"ba46c2bbd645589159376b81e8b61145e688e8d29e5910a52e4c3cf2dcd75733\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e7f46a14b50dd2c8eee76e785c97b0856f483d5a085543446067f31fe24ca083\"" Jan 28 01:00:53.418977 containerd[1461]: time="2026-01-28T01:00:53.418177076Z" level=info msg="StartContainer for \"e7f46a14b50dd2c8eee76e785c97b0856f483d5a085543446067f31fe24ca083\"" Jan 28 01:00:53.498827 systemd[1]: Started cri-containerd-e7f46a14b50dd2c8eee76e785c97b0856f483d5a085543446067f31fe24ca083.scope - libcontainer container e7f46a14b50dd2c8eee76e785c97b0856f483d5a085543446067f31fe24ca083. 
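
The ImageCreate, "Pulled image ... in 2.069087967s", CreateContainer, and StartContainer entries above are containerd's CRI plugin resolving ghcr.io/flatcar/calico/typha:v3.30.4 in the k8s.io namespace and launching calico-typha inside the already-created pod sandbox. As a rough illustration (not how the kubelet drives it, since the kubelet goes through CRI), the same pull can be issued directly with the containerd Go client against the default socket and namespace seen in this log:

    package main

    // Rough sketch: pulling the typha image from the log above through the
    // containerd Go client, using the default socket and the "k8s.io"
    // namespace that the CRI plugin uses. Illustrative only.

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }
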
Jan 28 01:00:53.622300 containerd[1461]: time="2026-01-28T01:00:53.621834173Z" level=info msg="StartContainer for \"e7f46a14b50dd2c8eee76e785c97b0856f483d5a085543446067f31fe24ca083\" returns successfully" Jan 28 01:00:54.123109 kubelet[2539]: E0128 01:00:54.122968 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:54.156505 kubelet[2539]: I0128 01:00:54.156407 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-696b75776-wxkp8" podStartSLOduration=2.082881145 podStartE2EDuration="4.156328315s" podCreationTimestamp="2026-01-28 01:00:50 +0000 UTC" firstStartedPulling="2026-01-28 01:00:51.255475092 +0000 UTC m=+31.516694104" lastFinishedPulling="2026-01-28 01:00:53.328922261 +0000 UTC m=+33.590141274" observedRunningTime="2026-01-28 01:00:54.155936008 +0000 UTC m=+34.417155021" watchObservedRunningTime="2026-01-28 01:00:54.156328315 +0000 UTC m=+34.417547328" Jan 28 01:00:54.193000 kubelet[2539]: E0128 01:00:54.192922 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.193000 kubelet[2539]: W0128 01:00:54.192973 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.193000 kubelet[2539]: E0128 01:00:54.192998 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.193564 kubelet[2539]: E0128 01:00:54.193519 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.193564 kubelet[2539]: W0128 01:00:54.193554 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.193564 kubelet[2539]: E0128 01:00:54.193568 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.193936 kubelet[2539]: E0128 01:00:54.193865 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.193936 kubelet[2539]: W0128 01:00:54.193902 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.193936 kubelet[2539]: E0128 01:00:54.193914 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.194769 kubelet[2539]: E0128 01:00:54.194732 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.194769 kubelet[2539]: W0128 01:00:54.194760 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.194769 kubelet[2539]: E0128 01:00:54.194770 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.195306 kubelet[2539]: E0128 01:00:54.195283 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.195306 kubelet[2539]: W0128 01:00:54.195298 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.195306 kubelet[2539]: E0128 01:00:54.195308 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.196032 kubelet[2539]: E0128 01:00:54.195871 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.196032 kubelet[2539]: W0128 01:00:54.195911 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.196032 kubelet[2539]: E0128 01:00:54.195924 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.197791 kubelet[2539]: E0128 01:00:54.197748 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.197791 kubelet[2539]: W0128 01:00:54.197765 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.197791 kubelet[2539]: E0128 01:00:54.197777 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.198392 kubelet[2539]: E0128 01:00:54.198233 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.198392 kubelet[2539]: W0128 01:00:54.198269 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.198392 kubelet[2539]: E0128 01:00:54.198279 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.198819 kubelet[2539]: E0128 01:00:54.198775 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.198819 kubelet[2539]: W0128 01:00:54.198808 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.198819 kubelet[2539]: E0128 01:00:54.198818 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.199093 kubelet[2539]: E0128 01:00:54.199073 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.199093 kubelet[2539]: W0128 01:00:54.199087 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.199093 kubelet[2539]: E0128 01:00:54.199097 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.199490 kubelet[2539]: E0128 01:00:54.199455 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.199490 kubelet[2539]: W0128 01:00:54.199487 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.199569 kubelet[2539]: E0128 01:00:54.199496 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.199980 kubelet[2539]: E0128 01:00:54.199935 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.199980 kubelet[2539]: W0128 01:00:54.199966 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.199980 kubelet[2539]: E0128 01:00:54.199975 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.200663 kubelet[2539]: E0128 01:00:54.200617 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.200663 kubelet[2539]: W0128 01:00:54.200650 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.200663 kubelet[2539]: E0128 01:00:54.200662 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.201242 kubelet[2539]: E0128 01:00:54.201134 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.201242 kubelet[2539]: W0128 01:00:54.201175 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.201242 kubelet[2539]: E0128 01:00:54.201220 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.201696 kubelet[2539]: E0128 01:00:54.201638 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.201696 kubelet[2539]: W0128 01:00:54.201674 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.201696 kubelet[2539]: E0128 01:00:54.201684 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.288677 kubelet[2539]: E0128 01:00:54.288623 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.288677 kubelet[2539]: W0128 01:00:54.288659 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.288677 kubelet[2539]: E0128 01:00:54.288691 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.289578 kubelet[2539]: E0128 01:00:54.289507 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.289578 kubelet[2539]: W0128 01:00:54.289521 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.289578 kubelet[2539]: E0128 01:00:54.289544 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.290638 kubelet[2539]: E0128 01:00:54.290600 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.290638 kubelet[2539]: W0128 01:00:54.290626 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.290751 kubelet[2539]: E0128 01:00:54.290735 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.291245 kubelet[2539]: E0128 01:00:54.291123 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.291245 kubelet[2539]: W0128 01:00:54.291227 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.291440 kubelet[2539]: E0128 01:00:54.291403 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.291905 kubelet[2539]: E0128 01:00:54.291872 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.291978 kubelet[2539]: W0128 01:00:54.291909 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.292069 kubelet[2539]: E0128 01:00:54.292038 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.292499 kubelet[2539]: E0128 01:00:54.292464 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.292543 kubelet[2539]: W0128 01:00:54.292504 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.292642 kubelet[2539]: E0128 01:00:54.292600 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.292933 kubelet[2539]: E0128 01:00:54.292898 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.293036 kubelet[2539]: W0128 01:00:54.292937 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.293036 kubelet[2539]: E0128 01:00:54.293003 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.293559 kubelet[2539]: E0128 01:00:54.293516 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.293559 kubelet[2539]: W0128 01:00:54.293556 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.293642 kubelet[2539]: E0128 01:00:54.293607 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.294057 kubelet[2539]: E0128 01:00:54.294002 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.294057 kubelet[2539]: W0128 01:00:54.294047 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.294228 kubelet[2539]: E0128 01:00:54.294152 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.294594 kubelet[2539]: E0128 01:00:54.294545 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.294594 kubelet[2539]: W0128 01:00:54.294582 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.294793 kubelet[2539]: E0128 01:00:54.294696 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.294967 kubelet[2539]: E0128 01:00:54.294938 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.295002 kubelet[2539]: W0128 01:00:54.294967 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.295063 kubelet[2539]: E0128 01:00:54.295026 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.295516 kubelet[2539]: E0128 01:00:54.295487 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.295516 kubelet[2539]: W0128 01:00:54.295515 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.295587 kubelet[2539]: E0128 01:00:54.295553 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.296115 kubelet[2539]: E0128 01:00:54.296058 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.296115 kubelet[2539]: W0128 01:00:54.296107 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.296241 kubelet[2539]: E0128 01:00:54.296164 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.296680 kubelet[2539]: E0128 01:00:54.296651 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.296680 kubelet[2539]: W0128 01:00:54.296680 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.296738 kubelet[2539]: E0128 01:00:54.296715 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.297266 kubelet[2539]: E0128 01:00:54.297223 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.297266 kubelet[2539]: W0128 01:00:54.297255 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.297322 kubelet[2539]: E0128 01:00:54.297291 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.297925 kubelet[2539]: E0128 01:00:54.297876 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.297925 kubelet[2539]: W0128 01:00:54.297921 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.297994 kubelet[2539]: E0128 01:00:54.297980 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.298478 kubelet[2539]: E0128 01:00:54.298439 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.298478 kubelet[2539]: W0128 01:00:54.298469 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.298478 kubelet[2539]: E0128 01:00:54.298480 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:54.298933 kubelet[2539]: E0128 01:00:54.298858 2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:54.298933 kubelet[2539]: W0128 01:00:54.298904 2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:54.298933 kubelet[2539]: E0128 01:00:54.298921 2539 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:54.981111 containerd[1461]: time="2026-01-28T01:00:54.981033908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:54.982198 containerd[1461]: time="2026-01-28T01:00:54.982118625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 28 01:00:54.983661 containerd[1461]: time="2026-01-28T01:00:54.983590360Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:54.986833 containerd[1461]: time="2026-01-28T01:00:54.986679973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:54.987511 containerd[1461]: time="2026-01-28T01:00:54.987435732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.651717304s" Jan 28 01:00:54.987610 containerd[1461]: time="2026-01-28T01:00:54.987511830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:00:54.990774 containerd[1461]: time="2026-01-28T01:00:54.990667430Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:00:55.007690 kubelet[2539]: E0128 01:00:55.007533 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:00:55.008702 containerd[1461]: time="2026-01-28T01:00:55.008656129Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7\"" Jan 28 01:00:55.009272 containerd[1461]: time="2026-01-28T01:00:55.009217380Z" level=info msg="StartContainer for \"0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7\"" Jan 28 01:00:55.062595 systemd[1]: Started cri-containerd-0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7.scope - libcontainer container 0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7. 
Jan 28 01:00:55.126889 containerd[1461]: time="2026-01-28T01:00:55.126707330Z" level=info msg="StartContainer for \"0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7\" returns successfully" Jan 28 01:00:55.131553 kubelet[2539]: E0128 01:00:55.131433 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:55.131925 kubelet[2539]: E0128 01:00:55.131831 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:55.149475 systemd[1]: cri-containerd-0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7.scope: Deactivated successfully. Jan 28 01:00:55.195869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7-rootfs.mount: Deactivated successfully. Jan 28 01:00:55.219044 containerd[1461]: time="2026-01-28T01:00:55.218843245Z" level=info msg="shim disconnected" id=0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7 namespace=k8s.io Jan 28 01:00:55.219193 containerd[1461]: time="2026-01-28T01:00:55.219060004Z" level=warning msg="cleaning up after shim disconnected" id=0adbc3da240959d3548382c4c8e9e86f06b781fc15c7f3e763f7fe1e7d00fbf7 namespace=k8s.io Jan 28 01:00:55.219193 containerd[1461]: time="2026-01-28T01:00:55.219070974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:00:56.138806 kubelet[2539]: E0128 01:00:56.138239 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:56.138806 kubelet[2539]: E0128 01:00:56.138633 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:00:56.140257 containerd[1461]: time="2026-01-28T01:00:56.139656797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:00:57.008036 kubelet[2539]: E0128 01:00:57.007975 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:00:59.007468 kubelet[2539]: E0128 01:00:59.007300 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:00:59.962546 containerd[1461]: time="2026-01-28T01:00:59.962286033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:59.963943 containerd[1461]: time="2026-01-28T01:00:59.963864786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 01:00:59.965644 containerd[1461]: time="2026-01-28T01:00:59.965550192Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:59.969167 containerd[1461]: time="2026-01-28T01:00:59.969089272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:59.969894 containerd[1461]: time="2026-01-28T01:00:59.969804302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.83010287s" Jan 28 01:00:59.969894 containerd[1461]: time="2026-01-28T01:00:59.969842891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:00:59.973265 containerd[1461]: time="2026-01-28T01:00:59.973105454Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:01:00.000053 containerd[1461]: time="2026-01-28T01:00:59.999901869Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64\"" Jan 28 01:01:00.001047 containerd[1461]: time="2026-01-28T01:01:00.000930120Z" level=info msg="StartContainer for \"356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64\"" Jan 28 01:01:00.111797 systemd[1]: Started cri-containerd-356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64.scope - libcontainer container 356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64. Jan 28 01:01:00.173263 containerd[1461]: time="2026-01-28T01:01:00.172008971Z" level=info msg="StartContainer for \"356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64\" returns successfully" Jan 28 01:01:01.008102 kubelet[2539]: E0128 01:01:01.007971 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:01.166906 kubelet[2539]: E0128 01:01:01.166800 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:01.552442 systemd[1]: cri-containerd-356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64.scope: Deactivated successfully. Jan 28 01:01:01.553539 systemd[1]: cri-containerd-356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64.scope: Consumed 1.660s CPU time. Jan 28 01:01:01.584086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64-rootfs.mount: Deactivated successfully. 
Jan 28 01:01:01.590017 containerd[1461]: time="2026-01-28T01:01:01.589940174Z" level=info msg="shim disconnected" id=356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64 namespace=k8s.io Jan 28 01:01:01.590497 containerd[1461]: time="2026-01-28T01:01:01.590018715Z" level=warning msg="cleaning up after shim disconnected" id=356783216f2d6fbc686ddc230fa2b2366794f36486d2d8bbe761801ba0185b64 namespace=k8s.io Jan 28 01:01:01.590497 containerd[1461]: time="2026-01-28T01:01:01.590031227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:01:01.604219 kubelet[2539]: I0128 01:01:01.604175 2539 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:01:01.655696 kubelet[2539]: I0128 01:01:01.652688 2539 status_manager.go:890] "Failed to get status for pod" podUID="1dbe8210-8e19-4cb9-afc1-01b9b6c96960" pod="calico-system/whisker-76c797496d-tfvr4" err="pods \"whisker-76c797496d-tfvr4\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Jan 28 01:01:01.655696 kubelet[2539]: W0128 01:01:01.652823 2539 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:01:01.655696 kubelet[2539]: E0128 01:01:01.652849 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.655696 kubelet[2539]: W0128 01:01:01.652933 2539 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:01:01.655696 kubelet[2539]: E0128 01:01:01.653083 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.660092 systemd[1]: Created slice kubepods-besteffort-pod1dbe8210_8e19_4cb9_afc1_01b9b6c96960.slice - libcontainer container kubepods-besteffort-pod1dbe8210_8e19_4cb9_afc1_01b9b6c96960.slice. 
Jan 28 01:01:01.667441 kubelet[2539]: W0128 01:01:01.665280 2539 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:01:01.671636 kubelet[2539]: E0128 01:01:01.667961 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.671636 kubelet[2539]: W0128 01:01:01.667251 2539 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jan 28 01:01:01.671636 kubelet[2539]: E0128 01:01:01.668013 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.671636 kubelet[2539]: W0128 01:01:01.668040 2539 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:01:01.671636 kubelet[2539]: E0128 01:01:01.668242 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.672694 kubelet[2539]: W0128 01:01:01.669769 2539 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jan 28 01:01:01.672694 kubelet[2539]: E0128 01:01:01.669807 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.672694 kubelet[2539]: W0128 01:01:01.671125 2539 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: 
configmaps "goldmane" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:01:01.672694 kubelet[2539]: E0128 01:01:01.671152 2539 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:01:01.680025 systemd[1]: Created slice kubepods-burstable-pod91989808_8381_42d6_9a65_8c96974c0e28.slice - libcontainer container kubepods-burstable-pod91989808_8381_42d6_9a65_8c96974c0e28.slice. Jan 28 01:01:01.691154 systemd[1]: Created slice kubepods-besteffort-pode05352ea_7146_4137_a0ba_4a0cd04f63ba.slice - libcontainer container kubepods-besteffort-pode05352ea_7146_4137_a0ba_4a0cd04f63ba.slice. Jan 28 01:01:01.704931 systemd[1]: Created slice kubepods-besteffort-pod1437c929_66a0_4403_bd2f_71d8e8195954.slice - libcontainer container kubepods-besteffort-pod1437c929_66a0_4403_bd2f_71d8e8195954.slice. Jan 28 01:01:01.722831 systemd[1]: Created slice kubepods-besteffort-pod1aa8fdb6_b1a2_4d7f_81bb_ae13794164c7.slice - libcontainer container kubepods-besteffort-pod1aa8fdb6_b1a2_4d7f_81bb_ae13794164c7.slice. Jan 28 01:01:01.734095 systemd[1]: Created slice kubepods-besteffort-pod5c69e2f3_7ee0_4e92_8e53_bbacc24ecf7f.slice - libcontainer container kubepods-besteffort-pod5c69e2f3_7ee0_4e92_8e53_bbacc24ecf7f.slice. Jan 28 01:01:01.750528 systemd[1]: Created slice kubepods-burstable-pod92332efa_a852_471d_9684_4f885e3f6360.slice - libcontainer container kubepods-burstable-pod92332efa_a852_471d_9684_4f885e3f6360.slice. 
Jan 28 01:01:01.770055 kubelet[2539]: I0128 01:01:01.769974 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-backend-key-pair\") pod \"whisker-76c797496d-tfvr4\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " pod="calico-system/whisker-76c797496d-tfvr4" Jan 28 01:01:01.770199 kubelet[2539]: I0128 01:01:01.770070 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1437c929-66a0-4403-bd2f-71d8e8195954-goldmane-ca-bundle\") pod \"goldmane-666569f655-nrtch\" (UID: \"1437c929-66a0-4403-bd2f-71d8e8195954\") " pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:01.770199 kubelet[2539]: I0128 01:01:01.770108 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f-tigera-ca-bundle\") pod \"calico-kube-controllers-776bf76bd-h6kxm\" (UID: \"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f\") " pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" Jan 28 01:01:01.770199 kubelet[2539]: I0128 01:01:01.770138 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7-calico-apiserver-certs\") pod \"calico-apiserver-6c4fbc6c9f-7hscs\" (UID: \"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7\") " pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" Jan 28 01:01:01.770199 kubelet[2539]: I0128 01:01:01.770169 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e05352ea-7146-4137-a0ba-4a0cd04f63ba-calico-apiserver-certs\") pod \"calico-apiserver-6c4fbc6c9f-jjn2k\" (UID: \"e05352ea-7146-4137-a0ba-4a0cd04f63ba\") " pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" Jan 28 01:01:01.770199 kubelet[2539]: I0128 01:01:01.770196 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92332efa-a852-471d-9684-4f885e3f6360-config-volume\") pod \"coredns-668d6bf9bc-vf6fz\" (UID: \"92332efa-a852-471d-9684-4f885e3f6360\") " pod="kube-system/coredns-668d6bf9bc-vf6fz" Jan 28 01:01:01.770532 kubelet[2539]: I0128 01:01:01.770224 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle\") pod \"whisker-76c797496d-tfvr4\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " pod="calico-system/whisker-76c797496d-tfvr4" Jan 28 01:01:01.770532 kubelet[2539]: I0128 01:01:01.770255 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1437c929-66a0-4403-bd2f-71d8e8195954-config\") pod \"goldmane-666569f655-nrtch\" (UID: \"1437c929-66a0-4403-bd2f-71d8e8195954\") " pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:01.770532 kubelet[2539]: I0128 01:01:01.770280 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwv6\" (UniqueName: 
\"kubernetes.io/projected/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-kube-api-access-bzwv6\") pod \"whisker-76c797496d-tfvr4\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " pod="calico-system/whisker-76c797496d-tfvr4" Jan 28 01:01:01.770532 kubelet[2539]: I0128 01:01:01.770420 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zklb\" (UniqueName: \"kubernetes.io/projected/1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7-kube-api-access-4zklb\") pod \"calico-apiserver-6c4fbc6c9f-7hscs\" (UID: \"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7\") " pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" Jan 28 01:01:01.770532 kubelet[2539]: I0128 01:01:01.770471 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2txcm\" (UniqueName: \"kubernetes.io/projected/92332efa-a852-471d-9684-4f885e3f6360-kube-api-access-2txcm\") pod \"coredns-668d6bf9bc-vf6fz\" (UID: \"92332efa-a852-471d-9684-4f885e3f6360\") " pod="kube-system/coredns-668d6bf9bc-vf6fz" Jan 28 01:01:01.770727 kubelet[2539]: I0128 01:01:01.770507 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91989808-8381-42d6-9a65-8c96974c0e28-config-volume\") pod \"coredns-668d6bf9bc-m9pm2\" (UID: \"91989808-8381-42d6-9a65-8c96974c0e28\") " pod="kube-system/coredns-668d6bf9bc-m9pm2" Jan 28 01:01:01.770727 kubelet[2539]: I0128 01:01:01.770534 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xz5b\" (UniqueName: \"kubernetes.io/projected/5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f-kube-api-access-5xz5b\") pod \"calico-kube-controllers-776bf76bd-h6kxm\" (UID: \"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f\") " pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" Jan 28 01:01:01.770727 kubelet[2539]: I0128 01:01:01.770570 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1437c929-66a0-4403-bd2f-71d8e8195954-goldmane-key-pair\") pod \"goldmane-666569f655-nrtch\" (UID: \"1437c929-66a0-4403-bd2f-71d8e8195954\") " pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:01.770727 kubelet[2539]: I0128 01:01:01.770597 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzfx\" (UniqueName: \"kubernetes.io/projected/1437c929-66a0-4403-bd2f-71d8e8195954-kube-api-access-vdzfx\") pod \"goldmane-666569f655-nrtch\" (UID: \"1437c929-66a0-4403-bd2f-71d8e8195954\") " pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:01.770727 kubelet[2539]: I0128 01:01:01.770626 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl587\" (UniqueName: \"kubernetes.io/projected/91989808-8381-42d6-9a65-8c96974c0e28-kube-api-access-vl587\") pod \"coredns-668d6bf9bc-m9pm2\" (UID: \"91989808-8381-42d6-9a65-8c96974c0e28\") " pod="kube-system/coredns-668d6bf9bc-m9pm2" Jan 28 01:01:01.770906 kubelet[2539]: I0128 01:01:01.770652 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkl2\" (UniqueName: \"kubernetes.io/projected/e05352ea-7146-4137-a0ba-4a0cd04f63ba-kube-api-access-qdkl2\") pod \"calico-apiserver-6c4fbc6c9f-jjn2k\" (UID: \"e05352ea-7146-4137-a0ba-4a0cd04f63ba\") " 
pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" Jan 28 01:01:01.985900 kubelet[2539]: E0128 01:01:01.985805 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:01.986924 containerd[1461]: time="2026-01-28T01:01:01.986701324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9pm2,Uid:91989808-8381-42d6-9a65-8c96974c0e28,Namespace:kube-system,Attempt:0,}" Jan 28 01:01:02.046856 containerd[1461]: time="2026-01-28T01:01:02.046783839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776bf76bd-h6kxm,Uid:5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f,Namespace:calico-system,Attempt:0,}" Jan 28 01:01:02.055678 kubelet[2539]: E0128 01:01:02.055060 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:02.057895 containerd[1461]: time="2026-01-28T01:01:02.057803997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6fz,Uid:92332efa-a852-471d-9684-4f885e3f6360,Namespace:kube-system,Attempt:0,}" Jan 28 01:01:02.120401 containerd[1461]: time="2026-01-28T01:01:02.119944445Z" level=error msg="Failed to destroy network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.120967 containerd[1461]: time="2026-01-28T01:01:02.120891067Z" level=error msg="encountered an error cleaning up failed sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.122522 containerd[1461]: time="2026-01-28T01:01:02.122461454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9pm2,Uid:91989808-8381-42d6-9a65-8c96974c0e28,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.123575 kubelet[2539]: E0128 01:01:02.123450 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.123867 kubelet[2539]: E0128 01:01:02.123639 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-m9pm2" Jan 28 01:01:02.123867 kubelet[2539]: E0128 01:01:02.123685 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m9pm2" Jan 28 01:01:02.123867 kubelet[2539]: E0128 01:01:02.123753 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m9pm2_kube-system(91989808-8381-42d6-9a65-8c96974c0e28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m9pm2_kube-system(91989808-8381-42d6-9a65-8c96974c0e28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m9pm2" podUID="91989808-8381-42d6-9a65-8c96974c0e28" Jan 28 01:01:02.175175 kubelet[2539]: E0128 01:01:02.174762 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:02.176024 kubelet[2539]: I0128 01:01:02.175922 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:02.176647 containerd[1461]: time="2026-01-28T01:01:02.176225023Z" level=error msg="Failed to destroy network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.178985 containerd[1461]: time="2026-01-28T01:01:02.176531415Z" level=info msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" Jan 28 01:01:02.178985 containerd[1461]: time="2026-01-28T01:01:02.176991331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:01:02.178985 containerd[1461]: time="2026-01-28T01:01:02.177089126Z" level=info msg="Ensure that sandbox 79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68 in task-service has been cleanup successfully" Jan 28 01:01:02.179970 containerd[1461]: time="2026-01-28T01:01:02.179450905Z" level=error msg="Failed to destroy network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.180663 containerd[1461]: time="2026-01-28T01:01:02.180586680Z" level=error msg="encountered an error cleaning up failed sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:01:02.180721 containerd[1461]: time="2026-01-28T01:01:02.180671612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6fz,Uid:92332efa-a852-471d-9684-4f885e3f6360,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.181005 kubelet[2539]: E0128 01:01:02.180910 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.181099 kubelet[2539]: E0128 01:01:02.181022 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vf6fz" Jan 28 01:01:02.181099 kubelet[2539]: E0128 01:01:02.181055 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vf6fz" Jan 28 01:01:02.181155 kubelet[2539]: E0128 01:01:02.181097 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vf6fz_kube-system(92332efa-a852-471d-9684-4f885e3f6360)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vf6fz_kube-system(92332efa-a852-471d-9684-4f885e3f6360)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vf6fz" podUID="92332efa-a852-471d-9684-4f885e3f6360" Jan 28 01:01:02.184415 containerd[1461]: time="2026-01-28T01:01:02.182536038Z" level=error msg="encountered an error cleaning up failed sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.184415 containerd[1461]: time="2026-01-28T01:01:02.182612866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776bf76bd-h6kxm,Uid:5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.184565 kubelet[2539]: E0128 01:01:02.182839 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.184565 kubelet[2539]: E0128 01:01:02.182873 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" Jan 28 01:01:02.184565 kubelet[2539]: E0128 01:01:02.182937 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" Jan 28 01:01:02.184647 kubelet[2539]: E0128 01:01:02.183027 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-776bf76bd-h6kxm_calico-system(5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-776bf76bd-h6kxm_calico-system(5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:02.235005 containerd[1461]: time="2026-01-28T01:01:02.234919401Z" level=error msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" failed" error="failed to destroy network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:02.235470 kubelet[2539]: E0128 01:01:02.235393 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:02.235605 kubelet[2539]: E0128 01:01:02.235493 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68"} Jan 28 01:01:02.235664 kubelet[2539]: E0128 01:01:02.235601 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91989808-8381-42d6-9a65-8c96974c0e28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:02.235664 kubelet[2539]: E0128 01:01:02.235638 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91989808-8381-42d6-9a65-8c96974c0e28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m9pm2" podUID="91989808-8381-42d6-9a65-8c96974c0e28" Jan 28 01:01:02.585012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68-shm.mount: Deactivated successfully. Jan 28 01:01:02.873185 kubelet[2539]: E0128 01:01:02.872964 2539 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 28 01:01:02.873185 kubelet[2539]: E0128 01:01:02.873097 2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle podName:1dbe8210-8e19-4cb9-afc1-01b9b6c96960 nodeName:}" failed. No retries permitted until 2026-01-28 01:01:03.373044577 +0000 UTC m=+43.634263590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle") pod "whisker-76c797496d-tfvr4" (UID: "1dbe8210-8e19-4cb9-afc1-01b9b6c96960") : failed to sync configmap cache: timed out waiting for the condition Jan 28 01:01:02.900390 containerd[1461]: time="2026-01-28T01:01:02.899134512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-jjn2k,Uid:e05352ea-7146-4137-a0ba-4a0cd04f63ba,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:01:02.916573 containerd[1461]: time="2026-01-28T01:01:02.916434251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrtch,Uid:1437c929-66a0-4403-bd2f-71d8e8195954,Namespace:calico-system,Attempt:0,}" Jan 28 01:01:02.929058 containerd[1461]: time="2026-01-28T01:01:02.928942336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-7hscs,Uid:1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:01:03.015937 systemd[1]: Created slice kubepods-besteffort-pode2bacd2d_0f8b_4cdb_8bbf_610d4ec6ce11.slice - libcontainer container kubepods-besteffort-pode2bacd2d_0f8b_4cdb_8bbf_610d4ec6ce11.slice. 
Jan 28 01:01:03.019162 containerd[1461]: time="2026-01-28T01:01:03.019034904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzx59,Uid:e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11,Namespace:calico-system,Attempt:0,}" Jan 28 01:01:03.085815 containerd[1461]: time="2026-01-28T01:01:03.085619405Z" level=error msg="Failed to destroy network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.086625 containerd[1461]: time="2026-01-28T01:01:03.086532772Z" level=error msg="encountered an error cleaning up failed sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.086715 containerd[1461]: time="2026-01-28T01:01:03.086645595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-jjn2k,Uid:e05352ea-7146-4137-a0ba-4a0cd04f63ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.088190 kubelet[2539]: E0128 01:01:03.086918 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.088190 kubelet[2539]: E0128 01:01:03.086993 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" Jan 28 01:01:03.088190 kubelet[2539]: E0128 01:01:03.087032 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" Jan 28 01:01:03.088952 kubelet[2539]: E0128 01:01:03.087089 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c4fbc6c9f-jjn2k_calico-apiserver(e05352ea-7146-4137-a0ba-4a0cd04f63ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4fbc6c9f-jjn2k_calico-apiserver(e05352ea-7146-4137-a0ba-4a0cd04f63ba)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:03.118155 containerd[1461]: time="2026-01-28T01:01:03.117810548Z" level=error msg="Failed to destroy network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.119457 containerd[1461]: time="2026-01-28T01:01:03.118928753Z" level=error msg="encountered an error cleaning up failed sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.119457 containerd[1461]: time="2026-01-28T01:01:03.118983513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrtch,Uid:1437c929-66a0-4403-bd2f-71d8e8195954,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.123735 kubelet[2539]: E0128 01:01:03.123606 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.123735 kubelet[2539]: E0128 01:01:03.123689 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:03.123735 kubelet[2539]: E0128 01:01:03.123710 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nrtch" Jan 28 01:01:03.125451 kubelet[2539]: E0128 01:01:03.123771 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nrtch_calico-system(1437c929-66a0-4403-bd2f-71d8e8195954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-nrtch_calico-system(1437c929-66a0-4403-bd2f-71d8e8195954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:03.156730 containerd[1461]: time="2026-01-28T01:01:03.156629215Z" level=error msg="Failed to destroy network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.159453 containerd[1461]: time="2026-01-28T01:01:03.157319479Z" level=error msg="encountered an error cleaning up failed sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.159453 containerd[1461]: time="2026-01-28T01:01:03.157505745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-7hscs,Uid:1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.159727 kubelet[2539]: E0128 01:01:03.157801 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.159727 kubelet[2539]: E0128 01:01:03.157862 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" Jan 28 01:01:03.159727 kubelet[2539]: E0128 01:01:03.157883 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" Jan 28 01:01:03.159897 kubelet[2539]: E0128 01:01:03.157932 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6c4fbc6c9f-7hscs_calico-apiserver(1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c4fbc6c9f-7hscs_calico-apiserver(1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:03.181839 kubelet[2539]: I0128 01:01:03.179999 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:03.182843 containerd[1461]: time="2026-01-28T01:01:03.182740671Z" level=info msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" Jan 28 01:01:03.185469 containerd[1461]: time="2026-01-28T01:01:03.182990913Z" level=info msg="Ensure that sandbox 1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449 in task-service has been cleanup successfully" Jan 28 01:01:03.185469 containerd[1461]: time="2026-01-28T01:01:03.184323906Z" level=info msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" Jan 28 01:01:03.185469 containerd[1461]: time="2026-01-28T01:01:03.184582993Z" level=info msg="Ensure that sandbox 432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4 in task-service has been cleanup successfully" Jan 28 01:01:03.185623 kubelet[2539]: I0128 01:01:03.183804 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:03.188175 kubelet[2539]: I0128 01:01:03.188032 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:03.189904 containerd[1461]: time="2026-01-28T01:01:03.189783389Z" level=info msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" Jan 28 01:01:03.190107 containerd[1461]: time="2026-01-28T01:01:03.189998236Z" level=info msg="Ensure that sandbox 23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb in task-service has been cleanup successfully" Jan 28 01:01:03.213878 kubelet[2539]: I0128 01:01:03.213132 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:03.214722 containerd[1461]: time="2026-01-28T01:01:03.214551834Z" level=error msg="Failed to destroy network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.215574 containerd[1461]: time="2026-01-28T01:01:03.215512486Z" level=error msg="encountered an error cleaning up failed sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.215827 containerd[1461]: time="2026-01-28T01:01:03.215600585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzx59,Uid:e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.216061 containerd[1461]: time="2026-01-28T01:01:03.215871856Z" level=info msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" Jan 28 01:01:03.216648 containerd[1461]: time="2026-01-28T01:01:03.216597464Z" level=info msg="Ensure that sandbox eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d in task-service has been cleanup successfully" Jan 28 01:01:03.219601 kubelet[2539]: E0128 01:01:03.219550 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.219680 kubelet[2539]: E0128 01:01:03.219615 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zzx59" Jan 28 01:01:03.219680 kubelet[2539]: E0128 01:01:03.219646 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zzx59" Jan 28 01:01:03.219815 kubelet[2539]: E0128 01:01:03.219692 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:03.223290 kubelet[2539]: I0128 01:01:03.222799 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:03.224451 containerd[1461]: time="2026-01-28T01:01:03.224331698Z" level=info msg="StopPodSandbox for 
\"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" Jan 28 01:01:03.226416 containerd[1461]: time="2026-01-28T01:01:03.225459581Z" level=info msg="Ensure that sandbox 097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847 in task-service has been cleanup successfully" Jan 28 01:01:03.281913 containerd[1461]: time="2026-01-28T01:01:03.280482131Z" level=error msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" failed" error="failed to destroy network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.286340 kubelet[2539]: E0128 01:01:03.286291 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:03.286612 kubelet[2539]: E0128 01:01:03.286574 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449"} Jan 28 01:01:03.286747 containerd[1461]: time="2026-01-28T01:01:03.286565391Z" level=error msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" failed" error="failed to destroy network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.286866 kubelet[2539]: E0128 01:01:03.286840 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1437c929-66a0-4403-bd2f-71d8e8195954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:03.287103 kubelet[2539]: E0128 01:01:03.287071 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1437c929-66a0-4403-bd2f-71d8e8195954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:03.287678 kubelet[2539]: E0128 01:01:03.287477 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:03.287678 kubelet[2539]: E0128 01:01:03.287535 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4"} Jan 28 01:01:03.287678 kubelet[2539]: E0128 01:01:03.287561 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92332efa-a852-471d-9684-4f885e3f6360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:03.287678 kubelet[2539]: E0128 01:01:03.287578 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92332efa-a852-471d-9684-4f885e3f6360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vf6fz" podUID="92332efa-a852-471d-9684-4f885e3f6360" Jan 28 01:01:03.302155 containerd[1461]: time="2026-01-28T01:01:03.301741602Z" level=error msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" failed" error="failed to destroy network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.302663 kubelet[2539]: E0128 01:01:03.302621 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:03.302749 kubelet[2539]: E0128 01:01:03.302681 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb"} Jan 28 01:01:03.302749 kubelet[2539]: E0128 01:01:03.302728 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:03.302924 kubelet[2539]: E0128 01:01:03.302769 2539 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:03.305901 containerd[1461]: time="2026-01-28T01:01:03.304880542Z" level=error msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" failed" error="failed to destroy network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.305983 kubelet[2539]: E0128 01:01:03.305249 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:03.305983 kubelet[2539]: E0128 01:01:03.305328 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d"} Jan 28 01:01:03.305983 kubelet[2539]: E0128 01:01:03.305443 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:03.305983 kubelet[2539]: E0128 01:01:03.305476 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:03.306453 kubelet[2539]: E0128 01:01:03.306291 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 
01:01:03.306453 kubelet[2539]: E0128 01:01:03.306331 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847"} Jan 28 01:01:03.306550 containerd[1461]: time="2026-01-28T01:01:03.306020515Z" level=error msg="StopPodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" failed" error="failed to destroy network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.306594 kubelet[2539]: E0128 01:01:03.306448 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e05352ea-7146-4137-a0ba-4a0cd04f63ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:03.306594 kubelet[2539]: E0128 01:01:03.306479 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e05352ea-7146-4137-a0ba-4a0cd04f63ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:03.467822 containerd[1461]: time="2026-01-28T01:01:03.467642384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c797496d-tfvr4,Uid:1dbe8210-8e19-4cb9-afc1-01b9b6c96960,Namespace:calico-system,Attempt:0,}" Jan 28 01:01:03.563499 containerd[1461]: time="2026-01-28T01:01:03.563434908Z" level=error msg="Failed to destroy network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.564032 containerd[1461]: time="2026-01-28T01:01:03.563961207Z" level=error msg="encountered an error cleaning up failed sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.564069 containerd[1461]: time="2026-01-28T01:01:03.564039518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c797496d-tfvr4,Uid:1dbe8210-8e19-4cb9-afc1-01b9b6c96960,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 
01:01:03.564990 kubelet[2539]: E0128 01:01:03.564892 2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:03.565277 kubelet[2539]: E0128 01:01:03.565131 2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c797496d-tfvr4" Jan 28 01:01:03.565277 kubelet[2539]: E0128 01:01:03.565159 2539 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76c797496d-tfvr4" Jan 28 01:01:03.565642 kubelet[2539]: E0128 01:01:03.565607 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76c797496d-tfvr4_calico-system(1dbe8210-8e19-4cb9-afc1-01b9b6c96960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-76c797496d-tfvr4_calico-system(1dbe8210-8e19-4cb9-afc1-01b9b6c96960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76c797496d-tfvr4" podUID="1dbe8210-8e19-4cb9-afc1-01b9b6c96960" Jan 28 01:01:04.227590 kubelet[2539]: I0128 01:01:04.227526 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:04.228601 containerd[1461]: time="2026-01-28T01:01:04.228438668Z" level=info msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" Jan 28 01:01:04.229022 kubelet[2539]: I0128 01:01:04.228798 2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:04.232707 containerd[1461]: time="2026-01-28T01:01:04.229613790Z" level=info msg="Ensure that sandbox 7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43 in task-service has been cleanup successfully" Jan 28 01:01:04.233717 containerd[1461]: time="2026-01-28T01:01:04.233493012Z" level=info msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" Jan 28 01:01:04.233799 containerd[1461]: time="2026-01-28T01:01:04.233728126Z" level=info msg="Ensure that sandbox 6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739 in task-service has been cleanup successfully" Jan 28 01:01:04.279113 containerd[1461]: time="2026-01-28T01:01:04.279014943Z" 
level=error msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" failed" error="failed to destroy network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:04.279589 kubelet[2539]: E0128 01:01:04.279522 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:04.279711 kubelet[2539]: E0128 01:01:04.279616 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739"} Jan 28 01:01:04.279711 kubelet[2539]: E0128 01:01:04.279669 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:04.279822 kubelet[2539]: E0128 01:01:04.279701 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76c797496d-tfvr4" podUID="1dbe8210-8e19-4cb9-afc1-01b9b6c96960" Jan 28 01:01:04.280052 containerd[1461]: time="2026-01-28T01:01:04.279980587Z" level=error msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" failed" error="failed to destroy network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:01:04.280494 kubelet[2539]: E0128 01:01:04.280317 2539 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:04.280494 kubelet[2539]: E0128 01:01:04.280449 2539 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43"} Jan 28 01:01:04.280494 kubelet[2539]: E0128 01:01:04.280483 2539 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:01:04.280677 kubelet[2539]: E0128 01:01:04.280513 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:09.264518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493505650.mount: Deactivated successfully. Jan 28 01:01:09.480047 containerd[1461]: time="2026-01-28T01:01:09.479947932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:01:09.489008 containerd[1461]: time="2026-01-28T01:01:09.488929528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 01:01:09.494683 containerd[1461]: time="2026-01-28T01:01:09.494571383Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:01:09.498499 containerd[1461]: time="2026-01-28T01:01:09.498439253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:01:09.501287 containerd[1461]: time="2026-01-28T01:01:09.501169408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.324104011s" Jan 28 01:01:09.501287 containerd[1461]: time="2026-01-28T01:01:09.501238965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:01:09.515945 containerd[1461]: time="2026-01-28T01:01:09.515674715Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:01:09.540339 containerd[1461]: time="2026-01-28T01:01:09.540259631Z" level=info msg="CreateContainer within sandbox \"116020c07182b5a5de4cb8ccf58bf5c013e9964c73d0bdd80e9120ab527fa6df\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791\"" Jan 28 01:01:09.543406 containerd[1461]: time="2026-01-28T01:01:09.541207692Z" level=info msg="StartContainer for \"7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791\"" Jan 28 01:01:09.629640 systemd[1]: Started cri-containerd-7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791.scope - libcontainer container 7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791. Jan 28 01:01:09.677525 containerd[1461]: time="2026-01-28T01:01:09.677474693Z" level=info msg="StartContainer for \"7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791\" returns successfully" Jan 28 01:01:09.814271 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:01:09.815341 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 01:01:09.934924 containerd[1461]: time="2026-01-28T01:01:09.934727626Z" level=info msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.039 [INFO][3810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.040 [INFO][3810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" iface="eth0" netns="/var/run/netns/cni-2223d8d6-9e5f-eba6-cb38-ca26f6f2d3a1" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.041 [INFO][3810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" iface="eth0" netns="/var/run/netns/cni-2223d8d6-9e5f-eba6-cb38-ca26f6f2d3a1" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.042 [INFO][3810] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" iface="eth0" netns="/var/run/netns/cni-2223d8d6-9e5f-eba6-cb38-ca26f6f2d3a1" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.042 [INFO][3810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.042 [INFO][3810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.155 [INFO][3821] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.156 [INFO][3821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.156 [INFO][3821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.172 [WARNING][3821] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.172 [INFO][3821] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.175 [INFO][3821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:10.181588 containerd[1461]: 2026-01-28 01:01:10.177 [INFO][3810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:10.182322 containerd[1461]: time="2026-01-28T01:01:10.181692789Z" level=info msg="TearDown network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" successfully" Jan 28 01:01:10.182322 containerd[1461]: time="2026-01-28T01:01:10.181732011Z" level=info msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" returns successfully" Jan 28 01:01:10.267640 systemd[1]: run-netns-cni\x2d2223d8d6\x2d9e5f\x2deba6\x2dcb38\x2dca26f6f2d3a1.mount: Deactivated successfully. Jan 28 01:01:10.291322 kubelet[2539]: E0128 01:01:10.291232 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:10.317312 kubelet[2539]: I0128 01:01:10.317185 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-smgl8" podStartSLOduration=2.154148889 podStartE2EDuration="20.317169079s" podCreationTimestamp="2026-01-28 01:00:50 +0000 UTC" firstStartedPulling="2026-01-28 01:00:51.339189218 +0000 UTC m=+31.600408232" lastFinishedPulling="2026-01-28 01:01:09.502209409 +0000 UTC m=+49.763428422" observedRunningTime="2026-01-28 01:01:10.31695069 +0000 UTC m=+50.578169723" watchObservedRunningTime="2026-01-28 01:01:10.317169079 +0000 UTC m=+50.578388091" Jan 28 01:01:10.343790 kubelet[2539]: I0128 01:01:10.343612 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-backend-key-pair\") pod \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " Jan 28 01:01:10.343790 kubelet[2539]: I0128 01:01:10.343714 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle\") pod \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " Jan 28 01:01:10.343790 kubelet[2539]: I0128 01:01:10.343750 2539 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzwv6\" (UniqueName: \"kubernetes.io/projected/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-kube-api-access-bzwv6\") pod \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\" (UID: \"1dbe8210-8e19-4cb9-afc1-01b9b6c96960\") " Jan 28 01:01:10.345069 kubelet[2539]: I0128 01:01:10.344961 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1dbe8210-8e19-4cb9-afc1-01b9b6c96960" (UID: "1dbe8210-8e19-4cb9-afc1-01b9b6c96960"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:01:10.353175 kubelet[2539]: I0128 01:01:10.351587 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-kube-api-access-bzwv6" (OuterVolumeSpecName: "kube-api-access-bzwv6") pod "1dbe8210-8e19-4cb9-afc1-01b9b6c96960" (UID: "1dbe8210-8e19-4cb9-afc1-01b9b6c96960"). InnerVolumeSpecName "kube-api-access-bzwv6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:01:10.353175 kubelet[2539]: I0128 01:01:10.351896 2539 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1dbe8210-8e19-4cb9-afc1-01b9b6c96960" (UID: "1dbe8210-8e19-4cb9-afc1-01b9b6c96960"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:01:10.353167 systemd[1]: var-lib-kubelet-pods-1dbe8210\x2d8e19\x2d4cb9\x2dafc1\x2d01b9b6c96960-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:01:10.353270 systemd[1]: var-lib-kubelet-pods-1dbe8210\x2d8e19\x2d4cb9\x2dafc1\x2d01b9b6c96960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzwv6.mount: Deactivated successfully. Jan 28 01:01:10.445098 kubelet[2539]: I0128 01:01:10.444789 2539 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bzwv6\" (UniqueName: \"kubernetes.io/projected/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-kube-api-access-bzwv6\") on node \"localhost\" DevicePath \"\"" Jan 28 01:01:10.445098 kubelet[2539]: I0128 01:01:10.444935 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:01:10.445098 kubelet[2539]: I0128 01:01:10.444953 2539 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dbe8210-8e19-4cb9-afc1-01b9b6c96960-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:01:10.600904 systemd[1]: Removed slice kubepods-besteffort-pod1dbe8210_8e19_4cb9_afc1_01b9b6c96960.slice - libcontainer container kubepods-besteffort-pod1dbe8210_8e19_4cb9_afc1_01b9b6c96960.slice. Jan 28 01:01:10.674552 systemd[1]: Created slice kubepods-besteffort-pod347634cc_6ade_443d_805f_7f8a4ce956c5.slice - libcontainer container kubepods-besteffort-pod347634cc_6ade_443d_805f_7f8a4ce956c5.slice. 
Jan 28 01:01:10.747314 kubelet[2539]: I0128 01:01:10.746956 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/347634cc-6ade-443d-805f-7f8a4ce956c5-whisker-ca-bundle\") pod \"whisker-6df8cd6fbb-9dvqd\" (UID: \"347634cc-6ade-443d-805f-7f8a4ce956c5\") " pod="calico-system/whisker-6df8cd6fbb-9dvqd" Jan 28 01:01:10.747314 kubelet[2539]: I0128 01:01:10.747034 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/347634cc-6ade-443d-805f-7f8a4ce956c5-whisker-backend-key-pair\") pod \"whisker-6df8cd6fbb-9dvqd\" (UID: \"347634cc-6ade-443d-805f-7f8a4ce956c5\") " pod="calico-system/whisker-6df8cd6fbb-9dvqd" Jan 28 01:01:10.747314 kubelet[2539]: I0128 01:01:10.747054 2539 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmm8\" (UniqueName: \"kubernetes.io/projected/347634cc-6ade-443d-805f-7f8a4ce956c5-kube-api-access-cbmm8\") pod \"whisker-6df8cd6fbb-9dvqd\" (UID: \"347634cc-6ade-443d-805f-7f8a4ce956c5\") " pod="calico-system/whisker-6df8cd6fbb-9dvqd" Jan 28 01:01:10.980623 containerd[1461]: time="2026-01-28T01:01:10.980583191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df8cd6fbb-9dvqd,Uid:347634cc-6ade-443d-805f-7f8a4ce956c5,Namespace:calico-system,Attempt:0,}" Jan 28 01:01:11.156889 systemd-networkd[1393]: cali09fc5e9a16f: Link UP Jan 28 01:01:11.157230 systemd-networkd[1393]: cali09fc5e9a16f: Gained carrier Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.033 [INFO][3867] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.049 [INFO][3867] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0 whisker-6df8cd6fbb- calico-system 347634cc-6ade-443d-805f-7f8a4ce956c5 926 0 2026-01-28 01:01:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6df8cd6fbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6df8cd6fbb-9dvqd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali09fc5e9a16f [] [] }} ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.050 [INFO][3867] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.087 [INFO][3880] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" HandleID="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Workload="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.088 [INFO][3880] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" HandleID="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Workload="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6df8cd6fbb-9dvqd", "timestamp":"2026-01-28 01:01:11.087884119 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.088 [INFO][3880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.088 [INFO][3880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.088 [INFO][3880] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.101 [INFO][3880] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.111 [INFO][3880] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.119 [INFO][3880] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.122 [INFO][3880] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.126 [INFO][3880] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.126 [INFO][3880] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.128 [INFO][3880] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513 Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.133 [INFO][3880] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.140 [INFO][3880] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.140 [INFO][3880] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" host="localhost" Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.140 [INFO][3880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:11.182572 containerd[1461]: 2026-01-28 01:01:11.140 [INFO][3880] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" HandleID="k8s-pod-network.e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Workload="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.143 [INFO][3867] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0", GenerateName:"whisker-6df8cd6fbb-", Namespace:"calico-system", SelfLink:"", UID:"347634cc-6ade-443d-805f-7f8a4ce956c5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df8cd6fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6df8cd6fbb-9dvqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09fc5e9a16f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.143 [INFO][3867] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.143 [INFO][3867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09fc5e9a16f ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.157 [INFO][3867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.158 [INFO][3867] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0", GenerateName:"whisker-6df8cd6fbb-", Namespace:"calico-system", SelfLink:"", UID:"347634cc-6ade-443d-805f-7f8a4ce956c5", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df8cd6fbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513", Pod:"whisker-6df8cd6fbb-9dvqd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali09fc5e9a16f", MAC:"32:41:9b:dc:65:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:11.184647 containerd[1461]: 2026-01-28 01:01:11.173 [INFO][3867] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513" Namespace="calico-system" Pod="whisker-6df8cd6fbb-9dvqd" WorkloadEndpoint="localhost-k8s-whisker--6df8cd6fbb--9dvqd-eth0" Jan 28 01:01:11.240708 containerd[1461]: time="2026-01-28T01:01:11.240554213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:11.240708 containerd[1461]: time="2026-01-28T01:01:11.240669003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:11.240945 containerd[1461]: time="2026-01-28T01:01:11.240690913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:11.244883 containerd[1461]: time="2026-01-28T01:01:11.244125821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:11.336653 systemd[1]: run-containerd-runc-k8s.io-e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513-runc.qIbhKV.mount: Deactivated successfully. Jan 28 01:01:11.345626 systemd[1]: Started cri-containerd-e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513.scope - libcontainer container e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513. 
Jan 28 01:01:11.363074 kubelet[2539]: E0128 01:01:11.363002 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:11.450027 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:11.550713 containerd[1461]: time="2026-01-28T01:01:11.550625301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df8cd6fbb-9dvqd,Uid:347634cc-6ade-443d-805f-7f8a4ce956c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"e810f72eb9450ade55ce852f4165076264d06a5ac4bcd230d4aff8357ceaa513\"" Jan 28 01:01:11.555759 containerd[1461]: time="2026-01-28T01:01:11.555104892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:01:11.658202 containerd[1461]: time="2026-01-28T01:01:11.657982770Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:11.662000 containerd[1461]: time="2026-01-28T01:01:11.660537541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:01:11.693671 containerd[1461]: time="2026-01-28T01:01:11.667293014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:01:11.694418 kubelet[2539]: E0128 01:01:11.694338 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:11.694611 kubelet[2539]: E0128 01:01:11.694593 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:11.697105 kubelet[2539]: E0128 01:01:11.696710 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:01bd545dd6a247eda48fc2f662e849f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:11.700128 containerd[1461]: time="2026-01-28T01:01:11.700013499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:01:11.718503 kernel: bpftool[4093]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 01:01:11.773414 containerd[1461]: time="2026-01-28T01:01:11.773318701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:11.775593 containerd[1461]: time="2026-01-28T01:01:11.775498768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:01:11.775791 containerd[1461]: time="2026-01-28T01:01:11.775624202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:11.776012 kubelet[2539]: E0128 01:01:11.775883 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:11.776012 kubelet[2539]: E0128 01:01:11.775973 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:11.776270 kubelet[2539]: E0128 01:01:11.776128 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:11.778529 kubelet[2539]: E0128 01:01:11.778420 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:11.989308 systemd-networkd[1393]: vxlan.calico: Link UP Jan 28 01:01:11.989608 systemd-networkd[1393]: vxlan.calico: Gained carrier Jan 28 01:01:12.011330 kubelet[2539]: 
I0128 01:01:12.011255 2539 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbe8210-8e19-4cb9-afc1-01b9b6c96960" path="/var/lib/kubelet/pods/1dbe8210-8e19-4cb9-afc1-01b9b6c96960/volumes" Jan 28 01:01:12.367974 kubelet[2539]: E0128 01:01:12.367884 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:12.368812 kubelet[2539]: E0128 01:01:12.368666 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:12.405036 systemd[1]: run-containerd-runc-k8s.io-7da207cafac4d83c149e9fd1a1f0a1caa29e645c4907f8e717c9295449eda791-runc.axGRH5.mount: Deactivated successfully. Jan 28 01:01:13.162946 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Jan 28 01:01:13.601306 systemd-networkd[1393]: cali09fc5e9a16f: Gained IPv6LL Jan 28 01:01:14.132921 containerd[1461]: time="2026-01-28T01:01:14.132761609Z" level=info msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" Jan 28 01:01:14.152725 kubelet[2539]: E0128 01:01:14.152011 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.244 [INFO][4204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.245 [INFO][4204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" iface="eth0" netns="/var/run/netns/cni-4e9bf732-673e-765c-c5dd-affcf9922dde" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.246 [INFO][4204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" iface="eth0" netns="/var/run/netns/cni-4e9bf732-673e-765c-c5dd-affcf9922dde" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.247 [INFO][4204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" iface="eth0" netns="/var/run/netns/cni-4e9bf732-673e-765c-c5dd-affcf9922dde" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.247 [INFO][4204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.247 [INFO][4204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.286 [INFO][4213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.286 [INFO][4213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.286 [INFO][4213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.298 [WARNING][4213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.298 [INFO][4213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.301 [INFO][4213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:14.309508 containerd[1461]: 2026-01-28 01:01:14.304 [INFO][4204] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:14.310218 containerd[1461]: time="2026-01-28T01:01:14.310127739Z" level=info msg="TearDown network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" successfully" Jan 28 01:01:14.310516 containerd[1461]: time="2026-01-28T01:01:14.310257667Z" level=info msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" returns successfully" Jan 28 01:01:14.311611 kubelet[2539]: E0128 01:01:14.311570 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:14.312779 containerd[1461]: time="2026-01-28T01:01:14.312516950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6fz,Uid:92332efa-a852-471d-9684-4f885e3f6360,Namespace:kube-system,Attempt:1,}" Jan 28 01:01:14.317003 systemd[1]: run-netns-cni\x2d4e9bf732\x2d673e\x2d765c\x2dc5dd\x2daffcf9922dde.mount: Deactivated successfully. Jan 28 01:01:14.616184 systemd-networkd[1393]: calia7f29c9199b: Link UP Jan 28 01:01:14.616977 systemd-networkd[1393]: calia7f29c9199b: Gained carrier Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.459 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0 coredns-668d6bf9bc- kube-system 92332efa-a852-471d-9684-4f885e3f6360 963 0 2026-01-28 01:00:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vf6fz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia7f29c9199b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.459 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.498 [INFO][4236] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" HandleID="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.498 [INFO][4236] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" HandleID="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vf6fz", "timestamp":"2026-01-28 01:01:14.497961058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.498 [INFO][4236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.498 [INFO][4236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.498 [INFO][4236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.511 [INFO][4236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.520 [INFO][4236] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.527 [INFO][4236] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.564 [INFO][4236] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.573 [INFO][4236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.573 [INFO][4236] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.581 [INFO][4236] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.591 [INFO][4236] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.608 [INFO][4236] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.608 [INFO][4236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" host="localhost" Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.608 [INFO][4236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:14.652574 containerd[1461]: 2026-01-28 01:01:14.608 [INFO][4236] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" HandleID="k8s-pod-network.f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.653304 containerd[1461]: 2026-01-28 01:01:14.611 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"92332efa-a852-471d-9684-4f885e3f6360", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vf6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7f29c9199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:14.653304 containerd[1461]: 2026-01-28 01:01:14.611 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.653304 containerd[1461]: 2026-01-28 01:01:14.611 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7f29c9199b ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.653304 containerd[1461]: 2026-01-28 01:01:14.618 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.653304 
containerd[1461]: 2026-01-28 01:01:14.619 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"92332efa-a852-471d-9684-4f885e3f6360", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f", Pod:"coredns-668d6bf9bc-vf6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7f29c9199b", MAC:"3e:4c:2d:72:1a:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:14.653304 containerd[1461]: 2026-01-28 01:01:14.647 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f" Namespace="kube-system" Pod="coredns-668d6bf9bc-vf6fz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:14.686914 containerd[1461]: time="2026-01-28T01:01:14.686571230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:14.686914 containerd[1461]: time="2026-01-28T01:01:14.686694406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:14.686914 containerd[1461]: time="2026-01-28T01:01:14.686708771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:14.686914 containerd[1461]: time="2026-01-28T01:01:14.686832419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:14.715826 systemd[1]: Started cri-containerd-f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f.scope - libcontainer container f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f. Jan 28 01:01:14.742302 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:14.788153 containerd[1461]: time="2026-01-28T01:01:14.788038467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf6fz,Uid:92332efa-a852-471d-9684-4f885e3f6360,Namespace:kube-system,Attempt:1,} returns sandbox id \"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f\"" Jan 28 01:01:14.789757 kubelet[2539]: E0128 01:01:14.789554 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:14.795114 containerd[1461]: time="2026-01-28T01:01:14.794878201Z" level=info msg="CreateContainer within sandbox \"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:01:14.831864 containerd[1461]: time="2026-01-28T01:01:14.831096705Z" level=info msg="CreateContainer within sandbox \"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f00c69ff071d285942e8ff225a937e7f97d09d0360527311cbc12479c8b824a\"" Jan 28 01:01:14.836527 containerd[1461]: time="2026-01-28T01:01:14.833952551Z" level=info msg="StartContainer for \"6f00c69ff071d285942e8ff225a937e7f97d09d0360527311cbc12479c8b824a\"" Jan 28 01:01:14.902625 systemd[1]: Started cri-containerd-6f00c69ff071d285942e8ff225a937e7f97d09d0360527311cbc12479c8b824a.scope - libcontainer container 6f00c69ff071d285942e8ff225a937e7f97d09d0360527311cbc12479c8b824a. Jan 28 01:01:14.972096 containerd[1461]: time="2026-01-28T01:01:14.971941713Z" level=info msg="StartContainer for \"6f00c69ff071d285942e8ff225a937e7f97d09d0360527311cbc12479c8b824a\" returns successfully" Jan 28 01:01:15.009070 containerd[1461]: time="2026-01-28T01:01:15.008950997Z" level=info msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" Jan 28 01:01:15.010028 containerd[1461]: time="2026-01-28T01:01:15.009947336Z" level=info msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" Jan 28 01:01:15.177106 kubelet[2539]: E0128 01:01:15.176882 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.127 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.129 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" iface="eth0" netns="/var/run/netns/cni-42fa9506-fe2a-a86a-ffe2-13c801abb504" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.130 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" iface="eth0" netns="/var/run/netns/cni-42fa9506-fe2a-a86a-ffe2-13c801abb504" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.157 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" iface="eth0" netns="/var/run/netns/cni-42fa9506-fe2a-a86a-ffe2-13c801abb504" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.181 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.181 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.277 [INFO][4371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.278 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.278 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.286 [WARNING][4371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.286 [INFO][4371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.288 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:15.295162 containerd[1461]: 2026-01-28 01:01:15.292 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:15.296683 containerd[1461]: time="2026-01-28T01:01:15.296316829Z" level=info msg="TearDown network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" successfully" Jan 28 01:01:15.296683 containerd[1461]: time="2026-01-28T01:01:15.296482885Z" level=info msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" returns successfully" Jan 28 01:01:15.297566 kubelet[2539]: E0128 01:01:15.297530 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:15.299745 containerd[1461]: time="2026-01-28T01:01:15.299624529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9pm2,Uid:91989808-8381-42d6-9a65-8c96974c0e28,Namespace:kube-system,Attempt:1,}" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.195 [INFO][4353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.197 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" iface="eth0" netns="/var/run/netns/cni-3131f93e-2839-e384-e288-73bc32c39c37" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.198 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" iface="eth0" netns="/var/run/netns/cni-3131f93e-2839-e384-e288-73bc32c39c37" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.199 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" iface="eth0" netns="/var/run/netns/cni-3131f93e-2839-e384-e288-73bc32c39c37" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.199 [INFO][4353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.199 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.290 [INFO][4378] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.291 [INFO][4378] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.291 [INFO][4378] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.301 [WARNING][4378] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.301 [INFO][4378] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.306 [INFO][4378] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:15.314453 containerd[1461]: 2026-01-28 01:01:15.310 [INFO][4353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:15.315186 containerd[1461]: time="2026-01-28T01:01:15.314878866Z" level=info msg="TearDown network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" successfully" Jan 28 01:01:15.315186 containerd[1461]: time="2026-01-28T01:01:15.314917196Z" level=info msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" returns successfully" Jan 28 01:01:15.316869 containerd[1461]: time="2026-01-28T01:01:15.316822916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzx59,Uid:e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11,Namespace:calico-system,Attempt:1,}" Jan 28 01:01:15.316918 systemd[1]: run-netns-cni\x2d42fa9506\x2dfe2a\x2da86a\x2dffe2\x2d13c801abb504.mount: Deactivated successfully. Jan 28 01:01:15.324189 systemd[1]: run-netns-cni\x2d3131f93e\x2d2839\x2de384\x2de288\x2d73bc32c39c37.mount: Deactivated successfully. 
Jan 28 01:01:15.616571 systemd-networkd[1393]: cali27df46633e0: Link UP Jan 28 01:01:15.618814 systemd-networkd[1393]: cali27df46633e0: Gained carrier Jan 28 01:01:15.637620 kubelet[2539]: I0128 01:01:15.635695 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vf6fz" podStartSLOduration=50.635595744 podStartE2EDuration="50.635595744s" podCreationTimestamp="2026-01-28 01:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:01:15.203067942 +0000 UTC m=+55.464286986" watchObservedRunningTime="2026-01-28 01:01:15.635595744 +0000 UTC m=+55.896814767" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.490 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zzx59-eth0 csi-node-driver- calico-system e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11 977 0 2026-01-28 01:00:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zzx59 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali27df46633e0 [] [] }} ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.490 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.545 [INFO][4421] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" HandleID="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.546 [INFO][4421] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" HandleID="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zzx59", "timestamp":"2026-01-28 01:01:15.545913363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.546 [INFO][4421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.546 [INFO][4421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.546 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.563 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.573 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.580 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.584 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.587 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.588 [INFO][4421] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.590 [INFO][4421] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91 Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.597 [INFO][4421] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.608 [INFO][4421] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.608 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" host="localhost" Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.608 [INFO][4421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
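The IPAM entries above confirm the host's affinity for the block 192.168.88.128/26 and then claim 192.168.88.131 out of it. A minimal sketch of the containment and block-size arithmetic implied there, using only values taken from these log lines:

package main

import (
	"fmt"
	"net/netip"
)

// The host "localhost" holds an affinity for the block 192.168.88.128/26,
// and each address the plugin hands out (here 192.168.88.131) must fall
// inside that block. Both values come from the IPAM entries above.
func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	assigned := netip.MustParseAddr("192.168.88.131")

	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
	fmt.Printf("%s inside %s: %v\n", assigned, block, block.Contains(assigned))
}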
Jan 28 01:01:15.639470 containerd[1461]: 2026-01-28 01:01:15.608 [INFO][4421] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" HandleID="k8s-pod-network.c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.611 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zzx59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zzx59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27df46633e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.612 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.612 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27df46633e0 ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.617 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.618 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zzx59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91", Pod:"csi-node-driver-zzx59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27df46633e0", MAC:"06:73:54:e9:86:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:15.640336 containerd[1461]: 2026-01-28 01:01:15.633 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91" Namespace="calico-system" Pod="csi-node-driver-zzx59" WorkloadEndpoint="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:15.682787 containerd[1461]: time="2026-01-28T01:01:15.680317079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:15.683100 containerd[1461]: time="2026-01-28T01:01:15.683039308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:15.683215 containerd[1461]: time="2026-01-28T01:01:15.683085672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:15.683763 containerd[1461]: time="2026-01-28T01:01:15.683577806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:15.721740 systemd[1]: Started cri-containerd-c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91.scope - libcontainer container c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91. 
Jan 28 01:01:15.726455 systemd-networkd[1393]: calia7f29c9199b: Gained IPv6LL Jan 28 01:01:15.754555 systemd-networkd[1393]: cali3583b62577e: Link UP Jan 28 01:01:15.758559 systemd-networkd[1393]: cali3583b62577e: Gained carrier Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.485 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0 coredns-668d6bf9bc- kube-system 91989808-8381-42d6-9a65-8c96974c0e28 974 0 2026-01-28 01:00:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m9pm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3583b62577e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.485 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.550 [INFO][4423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" HandleID="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.551 [INFO][4423] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" HandleID="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000519d30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m9pm2", "timestamp":"2026-01-28 01:01:15.550582503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.551 [INFO][4423] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.609 [INFO][4423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.609 [INFO][4423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.666 [INFO][4423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.677 [INFO][4423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.691 [INFO][4423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.694 [INFO][4423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.700 [INFO][4423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.700 [INFO][4423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.703 [INFO][4423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.713 [INFO][4423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.729 [INFO][4423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.730 [INFO][4423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" host="localhost" Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.730 [INFO][4423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
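The same IPAM sequence repeats for coredns-668d6bf9bc-m9pm2 and ends with 192.168.88.132 claimed from the same /26 block. When auditing a longer journal for these assignments, one option is to extract the "Successfully claimed IPs" entries; the sketch below does that over two shortened copies of lines from this section (the regular expression is an assumption about nothing more than the phrasing visible in this log).

package main

import (
	"fmt"
	"regexp"
)

// claimRE matches the phrasing used by the ipam.go 1262 entries above and
// captures the claimed CIDR, the block it came from and the host.
var claimRE = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+) .* host="([^"]+)"`)

func main() {
	// Shortened copies of two journal entries from this section.
	lines := []string{
		`ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c24bc73e..." host="localhost"`,
		`ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ca618d6d..." host="localhost"`,
	}
	for _, l := range lines {
		if m := claimRE.FindStringSubmatch(l); m != nil {
			fmt.Printf("host=%s block=%s claimed=%s\n", m[3], m[2], m[1])
		}
	}
}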
Jan 28 01:01:15.783733 containerd[1461]: 2026-01-28 01:01:15.730 [INFO][4423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" HandleID="k8s-pod-network.ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.785084 containerd[1461]: 2026-01-28 01:01:15.748 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91989808-8381-42d6-9a65-8c96974c0e28", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m9pm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3583b62577e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:15.785084 containerd[1461]: 2026-01-28 01:01:15.748 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.785084 containerd[1461]: 2026-01-28 01:01:15.748 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3583b62577e ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.785084 containerd[1461]: 2026-01-28 01:01:15.759 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.785084 
containerd[1461]: 2026-01-28 01:01:15.761 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91989808-8381-42d6-9a65-8c96974c0e28", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b", Pod:"coredns-668d6bf9bc-m9pm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3583b62577e", MAC:"8a:f2:d9:51:62:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:15.785084 containerd[1461]: 2026-01-28 01:01:15.778 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b" Namespace="kube-system" Pod="coredns-668d6bf9bc-m9pm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:15.788276 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:15.810614 containerd[1461]: time="2026-01-28T01:01:15.810466054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zzx59,Uid:e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11,Namespace:calico-system,Attempt:1,} returns sandbox id \"c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91\"" Jan 28 01:01:15.817499 containerd[1461]: time="2026-01-28T01:01:15.816209991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:15.827975 containerd[1461]: time="2026-01-28T01:01:15.827071092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:15.827975 containerd[1461]: time="2026-01-28T01:01:15.827267643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:15.827975 containerd[1461]: time="2026-01-28T01:01:15.827292610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:15.827975 containerd[1461]: time="2026-01-28T01:01:15.827719273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:15.874551 systemd[1]: Started cri-containerd-ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b.scope - libcontainer container ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b. Jan 28 01:01:15.892335 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:15.898028 containerd[1461]: time="2026-01-28T01:01:15.897903398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:15.899937 containerd[1461]: time="2026-01-28T01:01:15.899756903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:15.899937 containerd[1461]: time="2026-01-28T01:01:15.899903091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:15.900401 kubelet[2539]: E0128 01:01:15.900250 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:15.900401 kubelet[2539]: E0128 01:01:15.900324 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:15.900909 kubelet[2539]: E0128 01:01:15.900788 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:15.905172 containerd[1461]: time="2026-01-28T01:01:15.904942366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:15.946985 containerd[1461]: time="2026-01-28T01:01:15.946932612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9pm2,Uid:91989808-8381-42d6-9a65-8c96974c0e28,Namespace:kube-system,Attempt:1,} returns sandbox id \"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b\"" Jan 28 01:01:15.948927 kubelet[2539]: E0128 01:01:15.948763 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:15.952335 containerd[1461]: time="2026-01-28T01:01:15.952254962Z" level=info msg="CreateContainer within sandbox \"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:01:15.972996 containerd[1461]: time="2026-01-28T01:01:15.972824901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:15.978799 containerd[1461]: time="2026-01-28T01:01:15.978674685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:15.979006 containerd[1461]: time="2026-01-28T01:01:15.978717426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:15.979442 kubelet[2539]: E0128 01:01:15.979227 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:15.979559 kubelet[2539]: E0128 01:01:15.979441 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:15.980103 containerd[1461]: time="2026-01-28T01:01:15.979923627Z" level=info msg="CreateContainer within sandbox \"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"692dbd22d89a055f04f06788e5a30bbb0ae23030d4f164d8ef84255c85f898f2\"" Jan 28 01:01:15.981115 containerd[1461]: time="2026-01-28T01:01:15.980992282Z" level=info msg="StartContainer for \"692dbd22d89a055f04f06788e5a30bbb0ae23030d4f164d8ef84255c85f898f2\"" Jan 28 01:01:15.981315 kubelet[2539]: E0128 01:01:15.981266 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:15.983581 kubelet[2539]: E0128 01:01:15.983488 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:16.010305 containerd[1461]: time="2026-01-28T01:01:16.010107785Z" level=info msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" Jan 28 01:01:16.030848 systemd[1]: Started cri-containerd-692dbd22d89a055f04f06788e5a30bbb0ae23030d4f164d8ef84255c85f898f2.scope - libcontainer container 692dbd22d89a055f04f06788e5a30bbb0ae23030d4f164d8ef84255c85f898f2. 
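Every pull of ghcr.io/flatcar/calico/csi:v3.30.4 and ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4 above fails with NotFound, i.e. the registry reports no manifest for that tag. One way to verify that outside the kubelet is to request the manifest directly. The sketch below assumes ghcr.io follows the usual Docker Registry v2 anonymous-token flow; the token endpoint, scope string and Accept header are assumptions for illustration, not values taken from the log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	repo, tag := "flatcar/calico/csi", "v3.30.4" // image reference from the log above

	// 1. Anonymous pull token (assumed endpoint and scope format).
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		fmt.Fprintln(os.Stderr, "token request:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, "decode token:", err)
		os.Exit(1)
	}

	// 2. HEAD the manifest; a 404 here corresponds to the NotFound in the log.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "manifest request:", err)
		os.Exit(1)
	}
	res.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}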
Jan 28 01:01:16.079589 containerd[1461]: time="2026-01-28T01:01:16.079488164Z" level=info msg="StartContainer for \"692dbd22d89a055f04f06788e5a30bbb0ae23030d4f164d8ef84255c85f898f2\" returns successfully" Jan 28 01:01:16.185218 kubelet[2539]: E0128 01:01:16.184598 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.122 [INFO][4565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.123 [INFO][4565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" iface="eth0" netns="/var/run/netns/cni-266e3480-b7f8-81e6-38d3-703819cb6b45" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.123 [INFO][4565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" iface="eth0" netns="/var/run/netns/cni-266e3480-b7f8-81e6-38d3-703819cb6b45" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.124 [INFO][4565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" iface="eth0" netns="/var/run/netns/cni-266e3480-b7f8-81e6-38d3-703819cb6b45" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.124 [INFO][4565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.124 [INFO][4565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.165 [INFO][4595] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.165 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.165 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.174 [WARNING][4595] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.174 [INFO][4595] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.178 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:16.188770 containerd[1461]: 2026-01-28 01:01:16.182 [INFO][4565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:16.190036 containerd[1461]: time="2026-01-28T01:01:16.189767202Z" level=info msg="TearDown network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" successfully" Jan 28 01:01:16.190036 containerd[1461]: time="2026-01-28T01:01:16.189794161Z" level=info msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" returns successfully" Jan 28 01:01:16.192100 containerd[1461]: time="2026-01-28T01:01:16.191605350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776bf76bd-h6kxm,Uid:5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f,Namespace:calico-system,Attempt:1,}" Jan 28 01:01:16.193450 kubelet[2539]: E0128 01:01:16.193004 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:16.197034 kubelet[2539]: E0128 01:01:16.196841 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:16.204904 kubelet[2539]: I0128 01:01:16.204742 2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m9pm2" podStartSLOduration=51.204724963 podStartE2EDuration="51.204724963s" podCreationTimestamp="2026-01-28 01:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:01:16.204701395 +0000 UTC m=+56.465920408" watchObservedRunningTime="2026-01-28 01:01:16.204724963 +0000 UTC m=+56.465943976" Jan 28 01:01:16.325339 systemd[1]: 
run-netns-cni\x2d266e3480\x2db7f8\x2d81e6\x2d38d3\x2d703819cb6b45.mount: Deactivated successfully. Jan 28 01:01:16.396485 systemd-networkd[1393]: cali250390b3d94: Link UP Jan 28 01:01:16.400454 systemd-networkd[1393]: cali250390b3d94: Gained carrier Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.281 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0 calico-kube-controllers-776bf76bd- calico-system 5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f 999 0 2026-01-28 01:00:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:776bf76bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-776bf76bd-h6kxm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali250390b3d94 [] [] }} ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.281 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.317 [INFO][4622] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" HandleID="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.318 [INFO][4622] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" HandleID="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-776bf76bd-h6kxm", "timestamp":"2026-01-28 01:01:16.317197491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.318 [INFO][4622] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.318 [INFO][4622] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.318 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.326 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.342 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.353 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.357 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.361 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.361 [INFO][4622] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.365 [INFO][4622] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.374 [INFO][4622] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.384 [INFO][4622] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.384 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" host="localhost" Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.384 [INFO][4622] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:16.417236 containerd[1461]: 2026-01-28 01:01:16.384 [INFO][4622] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" HandleID="k8s-pod-network.5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.389 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0", GenerateName:"calico-kube-controllers-776bf76bd-", Namespace:"calico-system", SelfLink:"", UID:"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776bf76bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-776bf76bd-h6kxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali250390b3d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.389 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.389 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali250390b3d94 ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.397 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.398 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0", GenerateName:"calico-kube-controllers-776bf76bd-", Namespace:"calico-system", SelfLink:"", UID:"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776bf76bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a", Pod:"calico-kube-controllers-776bf76bd-h6kxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali250390b3d94", MAC:"16:96:e5:d4:f7:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:16.418966 containerd[1461]: 2026-01-28 01:01:16.412 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a" Namespace="calico-system" Pod="calico-kube-controllers-776bf76bd-h6kxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:16.462506 containerd[1461]: time="2026-01-28T01:01:16.462185572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:16.464438 containerd[1461]: time="2026-01-28T01:01:16.462719445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:16.464438 containerd[1461]: time="2026-01-28T01:01:16.462847610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:16.464438 containerd[1461]: time="2026-01-28T01:01:16.463818164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:16.509659 systemd[1]: Started cri-containerd-5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a.scope - libcontainer container 5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a. 
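This endpoint is recorded with MAC 16:96:e5:d4:f7:a5, and the earlier ones with 06:73:54:e9:86:db and 8a:f2:d9:51:62:0d; all three have the locally-administered bit set and the multicast bit clear, the usual pattern for randomly generated container MACs. The sketch below generates an address with that bit pattern; it is a generic illustration, not Calico's actual MAC-generation code.

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a random unicast, locally administered MAC address:
// the least-significant bit of the first byte (multicast) is cleared and the
// next bit (locally administered) is set, matching the pattern of the cali*
// endpoint MACs recorded in the log above.
func randomLocalMAC() (net.HardwareAddr, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	buf[0] = (buf[0] &^ 0x01) | 0x02 // clear multicast bit, set locally-administered bit
	return net.HardwareAddr(buf), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}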
Jan 28 01:01:16.528095 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:16.570778 containerd[1461]: time="2026-01-28T01:01:16.570721941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-776bf76bd-h6kxm,Uid:5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f,Namespace:calico-system,Attempt:1,} returns sandbox id \"5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a\"" Jan 28 01:01:16.573931 containerd[1461]: time="2026-01-28T01:01:16.573831417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:01:16.643304 containerd[1461]: time="2026-01-28T01:01:16.643186969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:16.645521 containerd[1461]: time="2026-01-28T01:01:16.645280171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:01:16.645521 containerd[1461]: time="2026-01-28T01:01:16.645444763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:16.645814 kubelet[2539]: E0128 01:01:16.645749 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:16.645908 kubelet[2539]: E0128 01:01:16.645813 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:16.646114 kubelet[2539]: E0128 01:01:16.646005 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xz5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-776bf76bd-h6kxm_calico-system(5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:16.647846 kubelet[2539]: E0128 01:01:16.647504 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:17.198923 kubelet[2539]: E0128 01:01:17.198796 2539 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:17.200708 kubelet[2539]: E0128 01:01:17.200335 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:17.202536 kubelet[2539]: E0128 01:01:17.202495 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:17.203036 kubelet[2539]: E0128 01:01:17.202908 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:17.325883 systemd-networkd[1393]: cali3583b62577e: Gained IPv6LL Jan 28 01:01:17.580708 systemd-networkd[1393]: cali27df46633e0: Gained IPv6LL Jan 28 01:01:18.009528 containerd[1461]: time="2026-01-28T01:01:18.008739314Z" level=info msg="StopPodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" Jan 28 01:01:18.009528 containerd[1461]: time="2026-01-28T01:01:18.009509057Z" level=info msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" Jan 28 01:01:18.093780 systemd-networkd[1393]: cali250390b3d94: Gained IPv6LL Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.091 [INFO][4710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.092 [INFO][4710] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" iface="eth0" netns="/var/run/netns/cni-57f85181-3b47-b31a-9534-ca518cff752a" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.094 [INFO][4710] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" iface="eth0" netns="/var/run/netns/cni-57f85181-3b47-b31a-9534-ca518cff752a" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.095 [INFO][4710] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" iface="eth0" netns="/var/run/netns/cni-57f85181-3b47-b31a-9534-ca518cff752a" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.096 [INFO][4710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.096 [INFO][4710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.165 [INFO][4727] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.165 [INFO][4727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.165 [INFO][4727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.175 [WARNING][4727] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.175 [INFO][4727] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.179 [INFO][4727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:18.184996 containerd[1461]: 2026-01-28 01:01:18.182 [INFO][4710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:18.186209 containerd[1461]: time="2026-01-28T01:01:18.186058961Z" level=info msg="TearDown network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" successfully" Jan 28 01:01:18.186209 containerd[1461]: time="2026-01-28T01:01:18.186126535Z" level=info msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" returns successfully" Jan 28 01:01:18.187282 containerd[1461]: time="2026-01-28T01:01:18.187223587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrtch,Uid:1437c929-66a0-4403-bd2f-71d8e8195954,Namespace:calico-system,Attempt:1,}" Jan 28 01:01:18.188843 systemd[1]: run-netns-cni\x2d57f85181\x2d3b47\x2db31a\x2d9534\x2dca518cff752a.mount: Deactivated successfully. 
Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.090 [INFO][4711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.090 [INFO][4711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" iface="eth0" netns="/var/run/netns/cni-24119ab2-144a-8fe1-2c5a-a0821b953d49" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.092 [INFO][4711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" iface="eth0" netns="/var/run/netns/cni-24119ab2-144a-8fe1-2c5a-a0821b953d49" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.094 [INFO][4711] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" iface="eth0" netns="/var/run/netns/cni-24119ab2-144a-8fe1-2c5a-a0821b953d49" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.094 [INFO][4711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.094 [INFO][4711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.176 [INFO][4725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.177 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.179 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.189 [WARNING][4725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.189 [INFO][4725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.193 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:18.199757 containerd[1461]: 2026-01-28 01:01:18.197 [INFO][4711] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:18.202148 containerd[1461]: time="2026-01-28T01:01:18.200906110Z" level=info msg="TearDown network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" successfully" Jan 28 01:01:18.202148 containerd[1461]: time="2026-01-28T01:01:18.200969076Z" level=info msg="StopPodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" returns successfully" Jan 28 01:01:18.202148 containerd[1461]: time="2026-01-28T01:01:18.201881016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-jjn2k,Uid:e05352ea-7146-4137-a0ba-4a0cd04f63ba,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:01:18.203286 kubelet[2539]: E0128 01:01:18.203254 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:18.204619 kubelet[2539]: E0128 01:01:18.204300 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:18.205791 kubelet[2539]: E0128 01:01:18.205768 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:18.211289 systemd[1]: run-netns-cni\x2d24119ab2\x2d144a\x2d8fe1\x2d2c5a\x2da0821b953d49.mount: Deactivated successfully. 
Jan 28 01:01:18.399467 systemd-networkd[1393]: cali0f6fa8b9363: Link UP Jan 28 01:01:18.401961 systemd-networkd[1393]: cali0f6fa8b9363: Gained carrier Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.293 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0 calico-apiserver-6c4fbc6c9f- calico-apiserver e05352ea-7146-4137-a0ba-4a0cd04f63ba 1041 0 2026-01-28 01:00:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4fbc6c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c4fbc6c9f-jjn2k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f6fa8b9363 [] [] }} ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.293 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.325 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" HandleID="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.325 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" HandleID="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c4fbc6c9f-jjn2k", "timestamp":"2026-01-28 01:01:18.325259587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.325 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.325 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.325 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.355 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.364 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.369 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.373 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.376 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.376 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.379 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.384 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.392 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.392 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" host="localhost" Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.392 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:18.437129 containerd[1461]: 2026-01-28 01:01:18.393 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" HandleID="k8s-pod-network.3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.395 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e05352ea-7146-4137-a0ba-4a0cd04f63ba", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c4fbc6c9f-jjn2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f6fa8b9363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.396 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.396 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f6fa8b9363 ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.404 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.407 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e05352ea-7146-4137-a0ba-4a0cd04f63ba", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba", Pod:"calico-apiserver-6c4fbc6c9f-jjn2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f6fa8b9363", MAC:"0e:37:8d:2e:7e:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:18.438329 containerd[1461]: 2026-01-28 01:01:18.424 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-jjn2k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:18.509243 containerd[1461]: time="2026-01-28T01:01:18.508483918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:18.509243 containerd[1461]: time="2026-01-28T01:01:18.508605071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:18.509243 containerd[1461]: time="2026-01-28T01:01:18.508621011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:18.509243 containerd[1461]: time="2026-01-28T01:01:18.508739058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:18.527833 systemd-networkd[1393]: calic7d1a23246d: Link UP Jan 28 01:01:18.528279 systemd-networkd[1393]: calic7d1a23246d: Gained carrier Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.293 [INFO][4741] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nrtch-eth0 goldmane-666569f655- calico-system 1437c929-66a0-4403-bd2f-71d8e8195954 1042 0 2026-01-28 01:00:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nrtch eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic7d1a23246d [] [] }} ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.293 [INFO][4741] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.351 [INFO][4770] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" HandleID="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.352 [INFO][4770] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" HandleID="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nrtch", "timestamp":"2026-01-28 01:01:18.351990782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.352 [INFO][4770] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.393 [INFO][4770] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.393 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.465 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.480 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.490 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.494 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.498 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.498 [INFO][4770] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.500 [INFO][4770] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.508 [INFO][4770] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.516 [INFO][4770] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.516 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" host="localhost" Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.516 [INFO][4770] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:18.553540 containerd[1461]: 2026-01-28 01:01:18.516 [INFO][4770] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" HandleID="k8s-pod-network.9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.522 [INFO][4741] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrtch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1437c929-66a0-4403-bd2f-71d8e8195954", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nrtch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7d1a23246d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.522 [INFO][4741] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.522 [INFO][4741] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d1a23246d ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.530 [INFO][4741] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.532 [INFO][4741] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrtch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1437c929-66a0-4403-bd2f-71d8e8195954", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba", Pod:"goldmane-666569f655-nrtch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7d1a23246d", MAC:"22:e7:45:bb:70:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:18.554107 containerd[1461]: 2026-01-28 01:01:18.545 [INFO][4741] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba" Namespace="calico-system" Pod="goldmane-666569f655-nrtch" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:18.560947 systemd[1]: Started cri-containerd-3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba.scope - libcontainer container 3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba. Jan 28 01:01:18.581956 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:18.594210 containerd[1461]: time="2026-01-28T01:01:18.593591043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:18.594210 containerd[1461]: time="2026-01-28T01:01:18.593654359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:18.594210 containerd[1461]: time="2026-01-28T01:01:18.593774210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:18.594210 containerd[1461]: time="2026-01-28T01:01:18.593899611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:18.617707 systemd[1]: Started cri-containerd-9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba.scope - libcontainer container 9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba. 
Jan 28 01:01:18.619134 containerd[1461]: time="2026-01-28T01:01:18.619102829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-jjn2k,Uid:e05352ea-7146-4137-a0ba-4a0cd04f63ba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba\"" Jan 28 01:01:18.621311 containerd[1461]: time="2026-01-28T01:01:18.621101781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:18.637525 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:18.672285 containerd[1461]: time="2026-01-28T01:01:18.672141009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrtch,Uid:1437c929-66a0-4403-bd2f-71d8e8195954,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba\"" Jan 28 01:01:18.695590 containerd[1461]: time="2026-01-28T01:01:18.695443899Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:18.697289 containerd[1461]: time="2026-01-28T01:01:18.697198642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:18.697615 containerd[1461]: time="2026-01-28T01:01:18.697436199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:18.698145 kubelet[2539]: E0128 01:01:18.697968 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:18.698145 kubelet[2539]: E0128 01:01:18.698068 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:18.698732 kubelet[2539]: E0128 01:01:18.698640 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdkl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-jjn2k_calico-apiserver(e05352ea-7146-4137-a0ba-4a0cd04f63ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:18.699086 containerd[1461]: time="2026-01-28T01:01:18.698960523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:18.700488 kubelet[2539]: E0128 01:01:18.700303 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:18.759790 containerd[1461]: time="2026-01-28T01:01:18.759631908Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:18.762134 containerd[1461]: time="2026-01-28T01:01:18.761452916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:18.762134 containerd[1461]: 
time="2026-01-28T01:01:18.761611969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:18.762262 kubelet[2539]: E0128 01:01:18.761956 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:18.762262 kubelet[2539]: E0128 01:01:18.762014 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:18.762262 kubelet[2539]: E0128 01:01:18.762196 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdzfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nrtch_calico-system(1437c929-66a0-4403-bd2f-71d8e8195954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:18.764124 kubelet[2539]: E0128 01:01:18.763954 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:19.008901 containerd[1461]: time="2026-01-28T01:01:19.008332888Z" level=info msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.062 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.063 [INFO][4897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" iface="eth0" netns="/var/run/netns/cni-c9e7aead-0574-8545-6f11-8f9fd2f09531" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.063 [INFO][4897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" iface="eth0" netns="/var/run/netns/cni-c9e7aead-0574-8545-6f11-8f9fd2f09531" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.063 [INFO][4897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" iface="eth0" netns="/var/run/netns/cni-c9e7aead-0574-8545-6f11-8f9fd2f09531" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.063 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.063 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.093 [INFO][4906] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.094 [INFO][4906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.094 [INFO][4906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.101 [WARNING][4906] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.101 [INFO][4906] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.104 [INFO][4906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:19.109124 containerd[1461]: 2026-01-28 01:01:19.106 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:19.109918 containerd[1461]: time="2026-01-28T01:01:19.109455231Z" level=info msg="TearDown network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" successfully" Jan 28 01:01:19.109918 containerd[1461]: time="2026-01-28T01:01:19.109485326Z" level=info msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" returns successfully" Jan 28 01:01:19.110768 containerd[1461]: time="2026-01-28T01:01:19.110710255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-7hscs,Uid:1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:01:19.191614 systemd[1]: run-netns-cni\x2dc9e7aead\x2d0574\x2d8545\x2d6f11\x2d8f9fd2f09531.mount: Deactivated successfully. 
Jan 28 01:01:19.209159 kubelet[2539]: E0128 01:01:19.209094 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:19.215634 kubelet[2539]: E0128 01:01:19.215409 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:19.263422 systemd-networkd[1393]: cali4595b9caba4: Link UP Jan 28 01:01:19.263778 systemd-networkd[1393]: cali4595b9caba4: Gained carrier Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.163 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0 calico-apiserver-6c4fbc6c9f- calico-apiserver 1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7 1066 0 2026-01-28 01:00:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c4fbc6c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c4fbc6c9f-7hscs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4595b9caba4 [] [] }} ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.163 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.198 [INFO][4929] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" HandleID="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.199 [INFO][4929] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" HandleID="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc000502890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c4fbc6c9f-7hscs", "timestamp":"2026-01-28 01:01:19.198874157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.199 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.199 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.199 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.209 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.218 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.228 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.230 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.236 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.236 [INFO][4929] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.242 [INFO][4929] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.246 [INFO][4929] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.253 [INFO][4929] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.254 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" host="localhost" Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.254 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:19.281207 containerd[1461]: 2026-01-28 01:01:19.254 [INFO][4929] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" HandleID="k8s-pod-network.dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.258 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c4fbc6c9f-7hscs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595b9caba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.258 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.259 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4595b9caba4 ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.263 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.263 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a", Pod:"calico-apiserver-6c4fbc6c9f-7hscs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595b9caba4", MAC:"e2:ed:0c:6d:76:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:19.282686 containerd[1461]: 2026-01-28 01:01:19.275 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a" Namespace="calico-apiserver" Pod="calico-apiserver-6c4fbc6c9f-7hscs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:19.309137 containerd[1461]: time="2026-01-28T01:01:19.308832702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:19.309137 containerd[1461]: time="2026-01-28T01:01:19.309036988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:19.309137 containerd[1461]: time="2026-01-28T01:01:19.309048841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:19.310523 containerd[1461]: time="2026-01-28T01:01:19.309281390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:19.348750 systemd[1]: Started cri-containerd-dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a.scope - libcontainer container dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a. 
Jan 28 01:01:19.364483 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:01:19.398687 containerd[1461]: time="2026-01-28T01:01:19.398593536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c4fbc6c9f-7hscs,Uid:1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a\"" Jan 28 01:01:19.401118 containerd[1461]: time="2026-01-28T01:01:19.401065896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:19.468395 containerd[1461]: time="2026-01-28T01:01:19.468307054Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:19.469950 containerd[1461]: time="2026-01-28T01:01:19.469832465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:19.469950 containerd[1461]: time="2026-01-28T01:01:19.469866846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:19.470394 kubelet[2539]: E0128 01:01:19.470282 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:19.470491 kubelet[2539]: E0128 01:01:19.470438 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:19.470767 kubelet[2539]: E0128 01:01:19.470686 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zklb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-7hscs_calico-apiserver(1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:19.472975 kubelet[2539]: E0128 01:01:19.472900 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:19.628645 systemd-networkd[1393]: cali0f6fa8b9363: Gained IPv6LL Jan 28 01:01:20.001946 containerd[1461]: time="2026-01-28T01:01:20.001759475Z" level=info msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" Jan 28 01:01:20.013677 systemd-networkd[1393]: calic7d1a23246d: Gained IPv6LL Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.055 [WARNING][5002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0", GenerateName:"calico-kube-controllers-776bf76bd-", Namespace:"calico-system", SelfLink:"", UID:"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776bf76bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a", Pod:"calico-kube-controllers-776bf76bd-h6kxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali250390b3d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.056 [INFO][5002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.056 [INFO][5002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" iface="eth0" netns="" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.056 [INFO][5002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.056 [INFO][5002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.078 [INFO][5013] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.078 [INFO][5013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.078 [INFO][5013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.086 [WARNING][5013] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.086 [INFO][5013] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.088 [INFO][5013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.093425 containerd[1461]: 2026-01-28 01:01:20.090 [INFO][5002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.093952 containerd[1461]: time="2026-01-28T01:01:20.093479341Z" level=info msg="TearDown network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" successfully" Jan 28 01:01:20.093952 containerd[1461]: time="2026-01-28T01:01:20.093555140Z" level=info msg="StopPodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" returns successfully" Jan 28 01:01:20.101182 containerd[1461]: time="2026-01-28T01:01:20.101066741Z" level=info msg="RemovePodSandbox for \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" Jan 28 01:01:20.102898 containerd[1461]: time="2026-01-28T01:01:20.102859951Z" level=info msg="Forcibly stopping sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\"" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.152 [WARNING][5030] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0", GenerateName:"calico-kube-controllers-776bf76bd-", Namespace:"calico-system", SelfLink:"", UID:"5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"776bf76bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5df3ccdffb4d6fd266848136e703fe1ee8b7a6f0436af5f061a9b38cc175ce5a", Pod:"calico-kube-controllers-776bf76bd-h6kxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali250390b3d94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.152 [INFO][5030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.152 [INFO][5030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" iface="eth0" netns="" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.152 [INFO][5030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.152 [INFO][5030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.182 [INFO][5038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.183 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.183 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.190 [WARNING][5038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.190 [INFO][5038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" HandleID="k8s-pod-network.23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Workload="localhost-k8s-calico--kube--controllers--776bf76bd--h6kxm-eth0" Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.193 [INFO][5038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.197958 containerd[1461]: 2026-01-28 01:01:20.195 [INFO][5030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb" Jan 28 01:01:20.198812 containerd[1461]: time="2026-01-28T01:01:20.198005083Z" level=info msg="TearDown network for sandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" successfully" Jan 28 01:01:20.208193 containerd[1461]: time="2026-01-28T01:01:20.208106987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:20.208290 containerd[1461]: time="2026-01-28T01:01:20.208205779Z" level=info msg="RemovePodSandbox \"23175a6584c258e9465c44169e422d3a9657b6c79335e9d06a2b024995c47fdb\" returns successfully" Jan 28 01:01:20.209099 containerd[1461]: time="2026-01-28T01:01:20.209062414Z" level=info msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" Jan 28 01:01:20.219629 kubelet[2539]: E0128 01:01:20.219571 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:20.220127 kubelet[2539]: E0128 01:01:20.220008 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:20.220337 kubelet[2539]: E0128 01:01:20.220243 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.272 [WARNING][5055] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrtch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1437c929-66a0-4403-bd2f-71d8e8195954", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba", Pod:"goldmane-666569f655-nrtch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7d1a23246d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.272 [INFO][5055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.272 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" iface="eth0" netns="" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.272 [INFO][5055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.272 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.313 [INFO][5063] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.313 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.313 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.321 [WARNING][5063] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.321 [INFO][5063] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.323 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.330216 containerd[1461]: 2026-01-28 01:01:20.326 [INFO][5055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.331044 containerd[1461]: time="2026-01-28T01:01:20.330236043Z" level=info msg="TearDown network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" successfully" Jan 28 01:01:20.331044 containerd[1461]: time="2026-01-28T01:01:20.330495142Z" level=info msg="StopPodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" returns successfully" Jan 28 01:01:20.331939 containerd[1461]: time="2026-01-28T01:01:20.331861374Z" level=info msg="RemovePodSandbox for \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" Jan 28 01:01:20.331939 containerd[1461]: time="2026-01-28T01:01:20.331933968Z" level=info msg="Forcibly stopping sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\"" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.372 [WARNING][5082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrtch-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1437c929-66a0-4403-bd2f-71d8e8195954", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e17f7e1a1c4f123e50e0bfa1a615578c5b3904afeb838e429bca46c63db8fba", Pod:"goldmane-666569f655-nrtch", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7d1a23246d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.372 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.372 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" iface="eth0" netns="" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.372 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.372 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.399 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.399 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.399 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.407 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.407 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" HandleID="k8s-pod-network.1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Workload="localhost-k8s-goldmane--666569f655--nrtch-eth0" Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.409 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.415868 containerd[1461]: 2026-01-28 01:01:20.412 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449" Jan 28 01:01:20.416287 containerd[1461]: time="2026-01-28T01:01:20.415890460Z" level=info msg="TearDown network for sandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" successfully" Jan 28 01:01:20.421915 containerd[1461]: time="2026-01-28T01:01:20.421810640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:20.421915 containerd[1461]: time="2026-01-28T01:01:20.421925332Z" level=info msg="RemovePodSandbox \"1506949bd2f68a30368304cbc7dd5d17e5a1bbf51c1b929feb79e26c7e7ea449\" returns successfully" Jan 28 01:01:20.422656 containerd[1461]: time="2026-01-28T01:01:20.422587998Z" level=info msg="StopPodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.471 [WARNING][5107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e05352ea-7146-4137-a0ba-4a0cd04f63ba", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba", Pod:"calico-apiserver-6c4fbc6c9f-jjn2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f6fa8b9363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.471 [INFO][5107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.472 [INFO][5107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" iface="eth0" netns="" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.472 [INFO][5107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.472 [INFO][5107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.498 [INFO][5116] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.498 [INFO][5116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.498 [INFO][5116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.508 [WARNING][5116] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.508 [INFO][5116] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.510 [INFO][5116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.515077 containerd[1461]: 2026-01-28 01:01:20.512 [INFO][5107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.515563 containerd[1461]: time="2026-01-28T01:01:20.515123519Z" level=info msg="TearDown network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" successfully" Jan 28 01:01:20.515563 containerd[1461]: time="2026-01-28T01:01:20.515161720Z" level=info msg="StopPodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" returns successfully" Jan 28 01:01:20.516112 containerd[1461]: time="2026-01-28T01:01:20.516035523Z" level=info msg="RemovePodSandbox for \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" Jan 28 01:01:20.516112 containerd[1461]: time="2026-01-28T01:01:20.516091166Z" level=info msg="Forcibly stopping sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\"" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.556 [WARNING][5135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e05352ea-7146-4137-a0ba-4a0cd04f63ba", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d1370d173755023ee8bc2abb1cb0cafbbc5c04e1883064466f869508cbc86ba", Pod:"calico-apiserver-6c4fbc6c9f-jjn2k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f6fa8b9363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.556 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.556 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" iface="eth0" netns="" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.556 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.556 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.585 [INFO][5144] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.586 [INFO][5144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.586 [INFO][5144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.594 [WARNING][5144] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.594 [INFO][5144] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" HandleID="k8s-pod-network.097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--jjn2k-eth0" Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.597 [INFO][5144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.603336 containerd[1461]: 2026-01-28 01:01:20.600 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847" Jan 28 01:01:20.603336 containerd[1461]: time="2026-01-28T01:01:20.603296188Z" level=info msg="TearDown network for sandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" successfully" Jan 28 01:01:20.614056 containerd[1461]: time="2026-01-28T01:01:20.613956608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:20.614056 containerd[1461]: time="2026-01-28T01:01:20.614040033Z" level=info msg="RemovePodSandbox \"097dcefb9cf4a04d07f4a0168a49fd8b0f2c9582c1b0082fbd7c716bdb9d3847\" returns successfully" Jan 28 01:01:20.614893 containerd[1461]: time="2026-01-28T01:01:20.614833160Z" level=info msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.675 [WARNING][5161] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" WorkloadEndpoint="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.675 [INFO][5161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.676 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" iface="eth0" netns="" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.676 [INFO][5161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.676 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.700 [INFO][5170] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.701 [INFO][5170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.701 [INFO][5170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.710 [WARNING][5170] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.710 [INFO][5170] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.712 [INFO][5170] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.718191 containerd[1461]: 2026-01-28 01:01:20.715 [INFO][5161] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.718689 containerd[1461]: time="2026-01-28T01:01:20.718182139Z" level=info msg="TearDown network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" successfully" Jan 28 01:01:20.718689 containerd[1461]: time="2026-01-28T01:01:20.718223456Z" level=info msg="StopPodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" returns successfully" Jan 28 01:01:20.719249 containerd[1461]: time="2026-01-28T01:01:20.719208973Z" level=info msg="RemovePodSandbox for \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" Jan 28 01:01:20.719298 containerd[1461]: time="2026-01-28T01:01:20.719263613Z" level=info msg="Forcibly stopping sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\"" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.766 [WARNING][5187] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" WorkloadEndpoint="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.766 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.766 [INFO][5187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" iface="eth0" netns="" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.766 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.766 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.799 [INFO][5195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.799 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.799 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.814 [WARNING][5195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.814 [INFO][5195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" HandleID="k8s-pod-network.6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Workload="localhost-k8s-whisker--76c797496d--tfvr4-eth0" Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.822 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.827659 containerd[1461]: 2026-01-28 01:01:20.825 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739" Jan 28 01:01:20.828048 containerd[1461]: time="2026-01-28T01:01:20.827672770Z" level=info msg="TearDown network for sandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" successfully" Jan 28 01:01:20.841742 containerd[1461]: time="2026-01-28T01:01:20.841644754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:20.841916 containerd[1461]: time="2026-01-28T01:01:20.841752684Z" level=info msg="RemovePodSandbox \"6484ff8fc9990687f444f7dd132626deab9e2afed406ed17a8aa584868c1f739\" returns successfully" Jan 28 01:01:20.842473 containerd[1461]: time="2026-01-28T01:01:20.842400709Z" level=info msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.894 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"92332efa-a852-471d-9684-4f885e3f6360", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f", Pod:"coredns-668d6bf9bc-vf6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7f29c9199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.897 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.897 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" iface="eth0" netns="" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.897 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.897 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.926 [INFO][5222] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.926 [INFO][5222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.926 [INFO][5222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.933 [WARNING][5222] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.934 [INFO][5222] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.935 [INFO][5222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:20.942931 containerd[1461]: 2026-01-28 01:01:20.939 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:20.942931 containerd[1461]: time="2026-01-28T01:01:20.942654311Z" level=info msg="TearDown network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" successfully" Jan 28 01:01:20.942931 containerd[1461]: time="2026-01-28T01:01:20.942688734Z" level=info msg="StopPodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" returns successfully" Jan 28 01:01:20.943703 containerd[1461]: time="2026-01-28T01:01:20.943391980Z" level=info msg="RemovePodSandbox for \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" Jan 28 01:01:20.943703 containerd[1461]: time="2026-01-28T01:01:20.943427526Z" level=info msg="Forcibly stopping sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\"" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:20.985 [WARNING][5240] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"92332efa-a852-471d-9684-4f885e3f6360", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f73d4186b70095a2025516bef0f37f1ab898ccc3099a401c2933ba28782b3a8f", Pod:"coredns-668d6bf9bc-vf6fz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7f29c9199b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:20.986 [INFO][5240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:20.986 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" iface="eth0" netns="" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:20.986 [INFO][5240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:20.986 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.008 [INFO][5249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.008 [INFO][5249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.008 [INFO][5249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.015 [WARNING][5249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.015 [INFO][5249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" HandleID="k8s-pod-network.432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Workload="localhost-k8s-coredns--668d6bf9bc--vf6fz-eth0" Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.016 [INFO][5249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.021444 containerd[1461]: 2026-01-28 01:01:21.018 [INFO][5240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4" Jan 28 01:01:21.021915 containerd[1461]: time="2026-01-28T01:01:21.021476290Z" level=info msg="TearDown network for sandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" successfully" Jan 28 01:01:21.025473 containerd[1461]: time="2026-01-28T01:01:21.025426301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:21.025473 containerd[1461]: time="2026-01-28T01:01:21.025486102Z" level=info msg="RemovePodSandbox \"432cdf6ebdc639bcb42365181ee466e8a13da1d7c2e493e3ad93503c0de3b9b4\" returns successfully" Jan 28 01:01:21.026140 containerd[1461]: time="2026-01-28T01:01:21.026104825Z" level=info msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.067 [WARNING][5265] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zzx59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91", Pod:"csi-node-driver-zzx59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27df46633e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.068 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.068 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" iface="eth0" netns="" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.068 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.068 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.088 [INFO][5274] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.088 [INFO][5274] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.088 [INFO][5274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.094 [WARNING][5274] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.094 [INFO][5274] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.096 [INFO][5274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.101112 containerd[1461]: 2026-01-28 01:01:21.098 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.101704 containerd[1461]: time="2026-01-28T01:01:21.101181203Z" level=info msg="TearDown network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" successfully" Jan 28 01:01:21.101704 containerd[1461]: time="2026-01-28T01:01:21.101206891Z" level=info msg="StopPodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" returns successfully" Jan 28 01:01:21.101826 containerd[1461]: time="2026-01-28T01:01:21.101796499Z" level=info msg="RemovePodSandbox for \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" Jan 28 01:01:21.101860 containerd[1461]: time="2026-01-28T01:01:21.101824121Z" level=info msg="Forcibly stopping sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\"" Jan 28 01:01:21.164710 systemd-networkd[1393]: cali4595b9caba4: Gained IPv6LL Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.143 [WARNING][5294] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zzx59-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c24bc73eccc089bd984135448923dc9236e47dd078ebf68a6b90e397faf93c91", Pod:"csi-node-driver-zzx59", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali27df46633e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.143 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.143 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" iface="eth0" netns="" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.143 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.143 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.170 [INFO][5303] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.171 [INFO][5303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.171 [INFO][5303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.179 [WARNING][5303] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.179 [INFO][5303] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" HandleID="k8s-pod-network.7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Workload="localhost-k8s-csi--node--driver--zzx59-eth0" Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.181 [INFO][5303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.186637 containerd[1461]: 2026-01-28 01:01:21.184 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43" Jan 28 01:01:21.187055 containerd[1461]: time="2026-01-28T01:01:21.186671598Z" level=info msg="TearDown network for sandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" successfully" Jan 28 01:01:21.190975 containerd[1461]: time="2026-01-28T01:01:21.190934628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:21.191056 containerd[1461]: time="2026-01-28T01:01:21.190985271Z" level=info msg="RemovePodSandbox \"7ce11f118b632d5887fb4e56b95162ddfa0bb1de234e9099b27c777b78ee7e43\" returns successfully" Jan 28 01:01:21.191839 containerd[1461]: time="2026-01-28T01:01:21.191778124Z" level=info msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" Jan 28 01:01:21.234058 kubelet[2539]: E0128 01:01:21.232946 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.236 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91989808-8381-42d6-9a65-8c96974c0e28", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b", Pod:"coredns-668d6bf9bc-m9pm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3583b62577e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.236 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.236 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" iface="eth0" netns="" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.236 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.236 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.268 [INFO][5331] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.269 [INFO][5331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.269 [INFO][5331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.275 [WARNING][5331] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.275 [INFO][5331] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.277 [INFO][5331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.282782 containerd[1461]: 2026-01-28 01:01:21.280 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.283659 containerd[1461]: time="2026-01-28T01:01:21.282835254Z" level=info msg="TearDown network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" successfully" Jan 28 01:01:21.283659 containerd[1461]: time="2026-01-28T01:01:21.282862023Z" level=info msg="StopPodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" returns successfully" Jan 28 01:01:21.283745 containerd[1461]: time="2026-01-28T01:01:21.283701334Z" level=info msg="RemovePodSandbox for \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" Jan 28 01:01:21.283775 containerd[1461]: time="2026-01-28T01:01:21.283757999Z" level=info msg="Forcibly stopping sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\"" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.319 [WARNING][5349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91989808-8381-42d6-9a65-8c96974c0e28", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca618d6d6830a0f2cd313a8bc4dafd72db1c5bbcb9688706818bc9b1e59eb82b", Pod:"coredns-668d6bf9bc-m9pm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3583b62577e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.319 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.319 [INFO][5349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" iface="eth0" netns="" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.319 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.319 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.346 [INFO][5358] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.346 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.346 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.353 [WARNING][5358] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.353 [INFO][5358] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" HandleID="k8s-pod-network.79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Workload="localhost-k8s-coredns--668d6bf9bc--m9pm2-eth0" Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.354 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.360475 containerd[1461]: 2026-01-28 01:01:21.357 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68" Jan 28 01:01:21.360974 containerd[1461]: time="2026-01-28T01:01:21.360570231Z" level=info msg="TearDown network for sandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" successfully" Jan 28 01:01:21.365379 containerd[1461]: time="2026-01-28T01:01:21.365269898Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:21.365379 containerd[1461]: time="2026-01-28T01:01:21.365331953Z" level=info msg="RemovePodSandbox \"79487612f042905bd94d4d19630999c2020ab45225c3d24b12b8bc62df176f68\" returns successfully" Jan 28 01:01:21.366103 containerd[1461]: time="2026-01-28T01:01:21.366064005Z" level=info msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.408 [WARNING][5376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a", Pod:"calico-apiserver-6c4fbc6c9f-7hscs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595b9caba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.408 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.408 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" iface="eth0" netns="" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.408 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.408 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.429 [INFO][5384] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.429 [INFO][5384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.429 [INFO][5384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.435 [WARNING][5384] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.435 [INFO][5384] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.437 [INFO][5384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.442732 containerd[1461]: 2026-01-28 01:01:21.440 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.443203 containerd[1461]: time="2026-01-28T01:01:21.442770099Z" level=info msg="TearDown network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" successfully" Jan 28 01:01:21.443203 containerd[1461]: time="2026-01-28T01:01:21.442796840Z" level=info msg="StopPodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" returns successfully" Jan 28 01:01:21.443626 containerd[1461]: time="2026-01-28T01:01:21.443532666Z" level=info msg="RemovePodSandbox for \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" Jan 28 01:01:21.443626 containerd[1461]: time="2026-01-28T01:01:21.443613595Z" level=info msg="Forcibly stopping sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\"" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.484 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0", GenerateName:"calico-apiserver-6c4fbc6c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c4fbc6c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dba08052d4bd170c6e9f2b1e776fd34738ef70379a0c2fc13aa44e36f740498a", Pod:"calico-apiserver-6c4fbc6c9f-7hscs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4595b9caba4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.485 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.485 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" iface="eth0" netns="" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.485 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.485 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.511 [INFO][5412] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.511 [INFO][5412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.511 [INFO][5412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.519 [WARNING][5412] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.519 [INFO][5412] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" HandleID="k8s-pod-network.eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Workload="localhost-k8s-calico--apiserver--6c4fbc6c9f--7hscs-eth0" Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.522 [INFO][5412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:21.527066 containerd[1461]: 2026-01-28 01:01:21.524 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d" Jan 28 01:01:21.527066 containerd[1461]: time="2026-01-28T01:01:21.526942514Z" level=info msg="TearDown network for sandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" successfully" Jan 28 01:01:21.531315 containerd[1461]: time="2026-01-28T01:01:21.531262563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:21.531315 containerd[1461]: time="2026-01-28T01:01:21.531337121Z" level=info msg="RemovePodSandbox \"eed94d4f01d21dffa66a5b318c6e4a7988e7bf94d7fa6a37b52e10cf0049149d\" returns successfully" Jan 28 01:01:28.263691 systemd[1]: Started sshd@7-10.0.0.45:22-10.0.0.1:55356.service - OpenSSH per-connection server daemon (10.0.0.1:55356). Jan 28 01:01:28.337951 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 55356 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:28.340159 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:28.346955 systemd-logind[1447]: New session 8 of user core. Jan 28 01:01:28.356655 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:01:28.526762 sshd[5435]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:28.532656 systemd[1]: sshd@7-10.0.0.45:22-10.0.0.1:55356.service: Deactivated successfully. Jan 28 01:01:28.536107 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:01:28.537278 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:01:28.539743 systemd-logind[1447]: Removed session 8. 
Jan 28 01:01:29.007849 kubelet[2539]: E0128 01:01:29.007728 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:29.009485 containerd[1461]: time="2026-01-28T01:01:29.009403111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:01:29.082512 containerd[1461]: time="2026-01-28T01:01:29.082104114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:29.085451 containerd[1461]: time="2026-01-28T01:01:29.085285236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:01:29.085577 containerd[1461]: time="2026-01-28T01:01:29.085503393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:01:29.085854 kubelet[2539]: E0128 01:01:29.085692 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:29.085854 kubelet[2539]: E0128 01:01:29.085768 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:29.086137 kubelet[2539]: E0128 01:01:29.086061 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:01bd545dd6a247eda48fc2f662e849f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:29.086305 containerd[1461]: time="2026-01-28T01:01:29.086226591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:01:29.159694 containerd[1461]: time="2026-01-28T01:01:29.159566847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:29.161765 containerd[1461]: time="2026-01-28T01:01:29.161666377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:01:29.161765 containerd[1461]: time="2026-01-28T01:01:29.161723630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:29.162057 kubelet[2539]: E0128 01:01:29.161987 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:29.162057 kubelet[2539]: E0128 01:01:29.162052 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:29.162578 kubelet[2539]: E0128 01:01:29.162509 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xz5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-776bf76bd-h6kxm_calico-system(5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:29.162774 containerd[1461]: time="2026-01-28T01:01:29.162625967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:01:29.164210 kubelet[2539]: E0128 01:01:29.164138 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" 
podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:29.278841 containerd[1461]: time="2026-01-28T01:01:29.278572626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:29.281512 containerd[1461]: time="2026-01-28T01:01:29.281317949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:01:29.281661 containerd[1461]: time="2026-01-28T01:01:29.281493235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:29.281850 kubelet[2539]: E0128 01:01:29.281733 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:29.282003 kubelet[2539]: E0128 01:01:29.281857 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:29.282715 kubelet[2539]: E0128 01:01:29.282076 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:29.283835 kubelet[2539]: E0128 01:01:29.283730 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:31.009213 containerd[1461]: time="2026-01-28T01:01:31.009143471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:31.079134 containerd[1461]: time="2026-01-28T01:01:31.079004928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:31.081017 containerd[1461]: time="2026-01-28T01:01:31.080878169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:31.081184 containerd[1461]: time="2026-01-28T01:01:31.081018040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:31.081333 kubelet[2539]: E0128 01:01:31.081244 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:31.082077 kubelet[2539]: E0128 01:01:31.081337 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:31.082077 kubelet[2539]: E0128 01:01:31.081736 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdkl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-jjn2k_calico-apiserver(e05352ea-7146-4137-a0ba-4a0cd04f63ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:31.082912 containerd[1461]: time="2026-01-28T01:01:31.081884144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:31.084671 kubelet[2539]: E0128 01:01:31.084591 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:31.144735 containerd[1461]: time="2026-01-28T01:01:31.144622148Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:31.146310 containerd[1461]: time="2026-01-28T01:01:31.146132037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:31.146310 containerd[1461]: time="2026-01-28T01:01:31.146270414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:31.146831 kubelet[2539]: E0128 01:01:31.146649 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:31.146831 kubelet[2539]: E0128 01:01:31.146748 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:31.147157 kubelet[2539]: E0128 01:01:31.147005 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:31.149652 containerd[1461]: time="2026-01-28T01:01:31.149612177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:31.210921 containerd[1461]: time="2026-01-28T01:01:31.210823680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:31.212505 containerd[1461]: time="2026-01-28T01:01:31.212453619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:31.212661 containerd[1461]: time="2026-01-28T01:01:31.212488236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:31.212850 kubelet[2539]: E0128 01:01:31.212791 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:31.212929 kubelet[2539]: E0128 01:01:31.212848 2539 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:31.213341 kubelet[2539]: E0128 01:01:31.212961 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:31.216849 kubelet[2539]: E0128 01:01:31.216699 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:32.008681 containerd[1461]: time="2026-01-28T01:01:32.008293428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:32.076860 containerd[1461]: time="2026-01-28T01:01:32.076693468Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:32.083856 containerd[1461]: time="2026-01-28T01:01:32.083802125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:32.084243 containerd[1461]: time="2026-01-28T01:01:32.083870394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:32.084327 kubelet[2539]: E0128 01:01:32.084234 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:32.084327 kubelet[2539]: E0128 01:01:32.084306 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:32.084768 kubelet[2539]: E0128 01:01:32.084527 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zklb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-7hscs_calico-apiserver(1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:32.086308 kubelet[2539]: E0128 01:01:32.086261 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:33.008378 kubelet[2539]: E0128 01:01:33.008274 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:33.560959 systemd[1]: Started sshd@8-10.0.0.45:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764). Jan 28 01:01:33.596023 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:33.597999 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:33.608324 systemd-logind[1447]: New session 9 of user core. Jan 28 01:01:33.617890 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:01:33.763022 sshd[5460]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:33.769792 systemd[1]: sshd@8-10.0.0.45:22-10.0.0.1:43764.service: Deactivated successfully. Jan 28 01:01:33.772565 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:01:33.773732 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:01:33.775860 systemd-logind[1447]: Removed session 9. 
Jan 28 01:01:34.010452 containerd[1461]: time="2026-01-28T01:01:34.010185488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:34.081435 containerd[1461]: time="2026-01-28T01:01:34.081278140Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:34.082793 containerd[1461]: time="2026-01-28T01:01:34.082703679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:34.082793 containerd[1461]: time="2026-01-28T01:01:34.082743687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:34.082944 kubelet[2539]: E0128 01:01:34.082909 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:34.083317 kubelet[2539]: E0128 01:01:34.082961 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:34.083317 kubelet[2539]: E0128 01:01:34.083082 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdzfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nrtch_calico-system(1437c929-66a0-4403-bd2f-71d8e8195954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:34.084539 kubelet[2539]: E0128 01:01:34.084328 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:38.788782 systemd[1]: Started sshd@9-10.0.0.45:22-10.0.0.1:43776.service - OpenSSH per-connection server daemon (10.0.0.1:43776). Jan 28 01:01:38.819724 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 43776 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:38.821779 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:38.827532 systemd-logind[1447]: New session 10 of user core. Jan 28 01:01:38.837679 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:01:38.972252 sshd[5475]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:38.977834 systemd[1]: sshd@9-10.0.0.45:22-10.0.0.1:43776.service: Deactivated successfully. Jan 28 01:01:38.980907 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:01:38.982954 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:01:38.984749 systemd-logind[1447]: Removed session 10. 
Jan 28 01:01:42.010180 kubelet[2539]: E0128 01:01:42.009837 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:42.013936 kubelet[2539]: E0128 01:01:42.011217 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:42.013936 kubelet[2539]: E0128 01:01:42.012551 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:42.504144 kubelet[2539]: E0128 01:01:42.504062 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:44.897273 systemd[1]: Started sshd@10-10.0.0.45:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250). 
Jan 28 01:01:44.914763 kubelet[2539]: E0128 01:01:44.912111 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:01:45.009020 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:45.011777 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:45.019120 systemd-logind[1447]: New session 11 of user core. Jan 28 01:01:45.031259 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:01:46.011955 kubelet[2539]: E0128 01:01:46.011895 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:01:46.023643 kubelet[2539]: E0128 01:01:46.019029 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:46.209328 sshd[5517]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:46.226841 systemd[1]: sshd@10-10.0.0.45:22-10.0.0.1:60250.service: Deactivated successfully. Jan 28 01:01:46.233662 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:01:46.243796 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:01:46.267976 systemd[1]: Started sshd@11-10.0.0.45:22-10.0.0.1:60258.service - OpenSSH per-connection server daemon (10.0.0.1:60258). Jan 28 01:01:46.272405 systemd-logind[1447]: Removed session 11. Jan 28 01:01:46.321662 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 60258 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:46.325870 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:46.336243 systemd-logind[1447]: New session 12 of user core. Jan 28 01:01:46.398930 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:01:46.711954 sshd[5533]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:46.738307 systemd[1]: sshd@11-10.0.0.45:22-10.0.0.1:60258.service: Deactivated successfully. 
Jan 28 01:01:46.749282 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:01:46.751644 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:01:46.770592 systemd[1]: Started sshd@12-10.0.0.45:22-10.0.0.1:60270.service - OpenSSH per-connection server daemon (10.0.0.1:60270). Jan 28 01:01:46.773329 systemd-logind[1447]: Removed session 12. Jan 28 01:01:46.877642 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 60270 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:46.883640 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:46.901836 systemd-logind[1447]: New session 13 of user core. Jan 28 01:01:46.919768 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:01:47.204022 sshd[5546]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:47.209939 systemd[1]: sshd@12-10.0.0.45:22-10.0.0.1:60270.service: Deactivated successfully. Jan 28 01:01:47.215197 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:01:47.219417 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:01:47.221464 systemd-logind[1447]: Removed session 13. Jan 28 01:01:52.233609 systemd[1]: Started sshd@13-10.0.0.45:22-10.0.0.1:60286.service - OpenSSH per-connection server daemon (10.0.0.1:60286). Jan 28 01:01:52.372120 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 60286 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:52.376470 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:52.391941 systemd-logind[1447]: New session 14 of user core. Jan 28 01:01:52.399690 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:01:52.619783 sshd[5561]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:52.629313 systemd[1]: sshd@13-10.0.0.45:22-10.0.0.1:60286.service: Deactivated successfully. Jan 28 01:01:52.642305 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:01:52.643947 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:01:52.646737 systemd-logind[1447]: Removed session 14. 
Jan 28 01:01:53.009467 kubelet[2539]: E0128 01:01:53.007672 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:01:56.010101 containerd[1461]: time="2026-01-28T01:01:56.008993917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:56.089043 containerd[1461]: time="2026-01-28T01:01:56.088974199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:56.090804 containerd[1461]: time="2026-01-28T01:01:56.090731757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:56.091092 kubelet[2539]: E0128 01:01:56.090923 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:56.091092 kubelet[2539]: E0128 01:01:56.090989 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:56.091653 containerd[1461]: time="2026-01-28T01:01:56.090819441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:56.091740 kubelet[2539]: E0128 01:01:56.091187 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:56.094250 containerd[1461]: time="2026-01-28T01:01:56.094207267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:56.175565 containerd[1461]: time="2026-01-28T01:01:56.175496382Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:56.177227 containerd[1461]: time="2026-01-28T01:01:56.177085913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:56.177310 containerd[1461]: time="2026-01-28T01:01:56.177204507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:56.177735 kubelet[2539]: E0128 01:01:56.177556 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:56.177735 kubelet[2539]: E0128 01:01:56.177642 2539 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:56.178456 kubelet[2539]: E0128 01:01:56.178052 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9zjlj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zzx59_calico-system(e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:56.180273 kubelet[2539]: E0128 01:01:56.180174 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:01:57.008470 containerd[1461]: time="2026-01-28T01:01:57.008318675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:01:57.074910 containerd[1461]: time="2026-01-28T01:01:57.074762668Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:57.076508 containerd[1461]: time="2026-01-28T01:01:57.076328356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:01:57.076616 containerd[1461]: time="2026-01-28T01:01:57.076512363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:01:57.076838 kubelet[2539]: E0128 01:01:57.076764 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:57.076889 kubelet[2539]: E0128 01:01:57.076847 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:57.077696 kubelet[2539]: E0128 01:01:57.077118 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:01bd545dd6a247eda48fc2f662e849f3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:57.077914 containerd[1461]: time="2026-01-28T01:01:57.077270297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:57.146329 containerd[1461]: time="2026-01-28T01:01:57.146178253Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:57.147624 containerd[1461]: time="2026-01-28T01:01:57.147518684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:57.147690 containerd[1461]: time="2026-01-28T01:01:57.147557223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:57.147854 kubelet[2539]: E0128 01:01:57.147811 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:57.148233 kubelet[2539]: E0128 01:01:57.147874 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:57.148423 containerd[1461]: time="2026-01-28T01:01:57.148337031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:01:57.148500 kubelet[2539]: E0128 01:01:57.148263 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdzfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nrtch_calico-system(1437c929-66a0-4403-bd2f-71d8e8195954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:57.149961 kubelet[2539]: E0128 01:01:57.149872 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:01:57.213976 containerd[1461]: 
time="2026-01-28T01:01:57.213862645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:57.215407 containerd[1461]: time="2026-01-28T01:01:57.215164745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:01:57.215407 containerd[1461]: time="2026-01-28T01:01:57.215391683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:57.215678 kubelet[2539]: E0128 01:01:57.215600 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:57.215678 kubelet[2539]: E0128 01:01:57.215670 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:57.216002 kubelet[2539]: E0128 01:01:57.215933 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xz5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-776bf76bd-h6kxm_calico-system(5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:57.216314 containerd[1461]: time="2026-01-28T01:01:57.216235103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:01:57.217551 kubelet[2539]: E0128 01:01:57.217507 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:01:57.282313 containerd[1461]: time="2026-01-28T01:01:57.282121645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:57.284177 containerd[1461]: time="2026-01-28T01:01:57.284016414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:01:57.284261 containerd[1461]: time="2026-01-28T01:01:57.284139506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:57.284442 kubelet[2539]: E0128 01:01:57.284341 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:57.284442 kubelet[2539]: E0128 01:01:57.284441 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:57.284612 kubelet[2539]: E0128 01:01:57.284579 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbmm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df8cd6fbb-9dvqd_calico-system(347634cc-6ade-443d-805f-7f8a4ce956c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:57.285945 kubelet[2539]: E0128 01:01:57.285812 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:01:57.633041 systemd[1]: Started sshd@14-10.0.0.45:22-10.0.0.1:37784.service - OpenSSH per-connection server daemon (10.0.0.1:37784). 
Jan 28 01:01:57.677188 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 37784 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:01:57.678958 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:57.684082 systemd-logind[1447]: New session 15 of user core. Jan 28 01:01:57.698660 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:01:57.875220 sshd[5583]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:57.881709 systemd[1]: sshd@14-10.0.0.45:22-10.0.0.1:37784.service: Deactivated successfully. Jan 28 01:01:57.883933 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:01:57.885681 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:01:57.887591 systemd-logind[1447]: Removed session 15. Jan 28 01:01:59.009291 containerd[1461]: time="2026-01-28T01:01:59.009186410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:59.072983 containerd[1461]: time="2026-01-28T01:01:59.072875362Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:59.074322 containerd[1461]: time="2026-01-28T01:01:59.074267353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:59.074465 containerd[1461]: time="2026-01-28T01:01:59.074391147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:59.074720 kubelet[2539]: E0128 01:01:59.074658 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:59.075094 kubelet[2539]: E0128 01:01:59.074726 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:59.075094 kubelet[2539]: E0128 01:01:59.074917 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdkl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-jjn2k_calico-apiserver(e05352ea-7146-4137-a0ba-4a0cd04f63ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:59.076147 kubelet[2539]: E0128 01:01:59.076112 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:02:00.008687 kubelet[2539]: E0128 01:02:00.008638 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:00.009920 containerd[1461]: time="2026-01-28T01:02:00.009866701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:02:00.072092 containerd[1461]: time="2026-01-28T01:02:00.071987818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:02:00.074122 containerd[1461]: time="2026-01-28T01:02:00.073708358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:02:00.074122 containerd[1461]: time="2026-01-28T01:02:00.073883509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:02:00.074329 kubelet[2539]: E0128 01:02:00.074165 2539 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:02:00.074329 kubelet[2539]: E0128 01:02:00.074252 2539 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:02:00.074969 kubelet[2539]: E0128 01:02:00.074516 2539 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4zklb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c4fbc6c9f-7hscs_calico-apiserver(1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:02:00.076150 kubelet[2539]: E0128 01:02:00.076111 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:02:02.890951 systemd[1]: Started sshd@15-10.0.0.45:22-10.0.0.1:50192.service - OpenSSH per-connection server daemon (10.0.0.1:50192). Jan 28 01:02:02.930544 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 50192 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:02.932572 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:02.938610 systemd-logind[1447]: New session 16 of user core. Jan 28 01:02:02.948600 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:02:03.073482 sshd[5603]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:03.078523 systemd[1]: sshd@15-10.0.0.45:22-10.0.0.1:50192.service: Deactivated successfully. Jan 28 01:02:03.080855 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:02:03.082695 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:02:03.085286 systemd-logind[1447]: Removed session 16. Jan 28 01:02:07.010274 kubelet[2539]: E0128 01:02:07.010205 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:02:08.089814 systemd[1]: Started sshd@16-10.0.0.45:22-10.0.0.1:50200.service - OpenSSH per-connection server daemon (10.0.0.1:50200). Jan 28 01:02:08.128523 sshd[5618]: Accepted publickey for core from 10.0.0.1 port 50200 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:08.130753 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:08.136845 systemd-logind[1447]: New session 17 of user core. Jan 28 01:02:08.144632 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:02:08.287436 sshd[5618]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:08.296905 systemd[1]: sshd@16-10.0.0.45:22-10.0.0.1:50200.service: Deactivated successfully. 
Jan 28 01:02:08.299670 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:02:08.301768 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:02:08.317125 systemd[1]: Started sshd@17-10.0.0.45:22-10.0.0.1:50204.service - OpenSSH per-connection server daemon (10.0.0.1:50204). Jan 28 01:02:08.318597 systemd-logind[1447]: Removed session 17. Jan 28 01:02:08.346998 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 50204 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:08.348988 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:08.354410 systemd-logind[1447]: New session 18 of user core. Jan 28 01:02:08.362537 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:02:08.732726 sshd[5632]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:08.741112 systemd[1]: sshd@17-10.0.0.45:22-10.0.0.1:50204.service: Deactivated successfully. Jan 28 01:02:08.742828 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:02:08.746077 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:02:08.752850 systemd[1]: Started sshd@18-10.0.0.45:22-10.0.0.1:50212.service - OpenSSH per-connection server daemon (10.0.0.1:50212). Jan 28 01:02:08.754924 systemd-logind[1447]: Removed session 18. Jan 28 01:02:08.797826 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 50212 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:08.799662 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:08.805308 systemd-logind[1447]: New session 19 of user core. Jan 28 01:02:08.814519 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:02:09.008135 kubelet[2539]: E0128 01:02:09.007917 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:09.407772 sshd[5645]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:09.420254 systemd[1]: sshd@18-10.0.0.45:22-10.0.0.1:50212.service: Deactivated successfully. Jan 28 01:02:09.427183 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:02:09.428972 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:02:09.435857 systemd[1]: Started sshd@19-10.0.0.45:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218). Jan 28 01:02:09.437616 systemd-logind[1447]: Removed session 19. Jan 28 01:02:09.476642 sshd[5667]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:09.479144 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:09.485314 systemd-logind[1447]: New session 20 of user core. Jan 28 01:02:09.492604 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:02:09.761021 sshd[5667]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:09.776214 systemd[1]: sshd@19-10.0.0.45:22-10.0.0.1:50218.service: Deactivated successfully. Jan 28 01:02:09.779415 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:02:09.782532 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. 
Jan 28 01:02:09.790834 systemd[1]: Started sshd@20-10.0.0.45:22-10.0.0.1:50232.service - OpenSSH per-connection server daemon (10.0.0.1:50232). Jan 28 01:02:09.794144 systemd-logind[1447]: Removed session 20. Jan 28 01:02:09.822113 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:09.824660 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:09.844024 systemd-logind[1447]: New session 21 of user core. Jan 28 01:02:09.853673 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:02:10.013851 kubelet[2539]: E0128 01:02:10.013316 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:02:10.044005 sshd[5679]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:10.050610 systemd[1]: sshd@20-10.0.0.45:22-10.0.0.1:50232.service: Deactivated successfully. Jan 28 01:02:10.053674 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:02:10.055536 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:02:10.057307 systemd-logind[1447]: Removed session 21. Jan 28 01:02:11.008719 kubelet[2539]: E0128 01:02:11.008510 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:02:11.009870 kubelet[2539]: E0128 01:02:11.009793 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:02:12.011175 kubelet[2539]: E0128 01:02:12.010923 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:02:13.009144 kubelet[2539]: E0128 01:02:13.008966 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:02:15.058201 systemd[1]: Started sshd@21-10.0.0.45:22-10.0.0.1:35706.service - OpenSSH per-connection server daemon (10.0.0.1:35706). Jan 28 01:02:15.105158 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 35706 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:15.106916 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:15.112594 systemd-logind[1447]: New session 22 of user core. Jan 28 01:02:15.122631 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:02:15.244001 sshd[5718]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:15.249602 systemd[1]: sshd@21-10.0.0.45:22-10.0.0.1:35706.service: Deactivated successfully. Jan 28 01:02:15.252892 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:02:15.254157 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:02:15.256824 systemd-logind[1447]: Removed session 22. Jan 28 01:02:20.008735 kubelet[2539]: E0128 01:02:20.008125 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:20.266626 systemd[1]: Started sshd@22-10.0.0.45:22-10.0.0.1:35722.service - OpenSSH per-connection server daemon (10.0.0.1:35722). Jan 28 01:02:20.302925 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:20.305088 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:20.310913 systemd-logind[1447]: New session 23 of user core. Jan 28 01:02:20.320799 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:02:20.463430 sshd[5739]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:20.468214 systemd[1]: sshd@22-10.0.0.45:22-10.0.0.1:35722.service: Deactivated successfully. Jan 28 01:02:20.470692 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:02:20.471644 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:02:20.473547 systemd-logind[1447]: Removed session 23. 
Jan 28 01:02:21.018618 kubelet[2539]: E0128 01:02:21.015714 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zzx59" podUID="e2bacd2d-0f8b-4cdb-8bbf-610d4ec6ce11" Jan 28 01:02:22.008501 kubelet[2539]: E0128 01:02:22.008326 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrtch" podUID="1437c929-66a0-4403-bd2f-71d8e8195954" Jan 28 01:02:25.009622 kubelet[2539]: E0128 01:02:25.009484 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-jjn2k" podUID="e05352ea-7146-4137-a0ba-4a0cd04f63ba" Jan 28 01:02:25.011173 kubelet[2539]: E0128 01:02:25.011081 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-776bf76bd-h6kxm" podUID="5c69e2f3-7ee0-4e92-8e53-bbacc24ecf7f" Jan 28 01:02:25.011295 kubelet[2539]: E0128 01:02:25.011170 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6c4fbc6c9f-7hscs" podUID="1aa8fdb6-b1a2-4d7f-81bb-ae13794164c7" Jan 28 01:02:25.475891 systemd[1]: Started sshd@23-10.0.0.45:22-10.0.0.1:54648.service - OpenSSH per-connection server daemon (10.0.0.1:54648). Jan 28 01:02:25.533693 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 54648 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:25.535730 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:25.541301 systemd-logind[1447]: New session 24 of user core. Jan 28 01:02:25.552681 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:02:25.683690 sshd[5753]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:25.686946 systemd[1]: sshd@23-10.0.0.45:22-10.0.0.1:54648.service: Deactivated successfully. Jan 28 01:02:25.689293 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:02:25.691175 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:02:25.692960 systemd-logind[1447]: Removed session 24. Jan 28 01:02:26.011945 kubelet[2539]: E0128 01:02:26.011287 2539 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df8cd6fbb-9dvqd" podUID="347634cc-6ade-443d-805f-7f8a4ce956c5" Jan 28 01:02:30.009147 kubelet[2539]: E0128 01:02:30.008974 2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:02:30.699620 systemd[1]: Started sshd@24-10.0.0.45:22-10.0.0.1:54652.service - OpenSSH per-connection server daemon (10.0.0.1:54652). Jan 28 01:02:30.736162 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 54652 ssh2: RSA SHA256:ncFNMFO8r+y6VW2thGYYQiv4lgD7mbt7MO5WT0IEBK4 Jan 28 01:02:30.738064 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:30.743683 systemd-logind[1447]: New session 25 of user core. Jan 28 01:02:30.753528 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:02:30.874110 sshd[5769]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:30.878886 systemd[1]: sshd@24-10.0.0.45:22-10.0.0.1:54652.service: Deactivated successfully. Jan 28 01:02:30.881245 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:02:30.882157 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:02:30.883398 systemd-logind[1447]: Removed session 25.