Jan 24 00:29:34.984749 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:29:34.984770 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:29:34.984778 kernel: BIOS-provided physical RAM map:
Jan 24 00:29:34.984784 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 24 00:29:34.984790 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 24 00:29:34.984799 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 24 00:29:34.984805 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 24 00:29:34.984811 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 24 00:29:34.984817 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 24 00:29:34.984822 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 24 00:29:34.984828 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 24 00:29:34.984834 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 24 00:29:34.984839 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 24 00:29:34.984848 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 24 00:29:34.984854 kernel: NX (Execute Disable) protection: active
Jan 24 00:29:34.984860 kernel: APIC: Static calls initialized
Jan 24 00:29:34.984866 kernel: SMBIOS 2.8 present.
Jan 24 00:29:34.984873 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 24 00:29:34.984879 kernel: Hypervisor detected: KVM
Jan 24 00:29:34.984887 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:29:34.984893 kernel: kvm-clock: using sched offset of 5770460070 cycles
Jan 24 00:29:34.984899 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:29:34.984905 kernel: tsc: Detected 2000.000 MHz processor
Jan 24 00:29:34.984912 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:29:34.984918 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:29:34.984924 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 24 00:29:34.984931 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 24 00:29:34.984937 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:29:34.984946 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 24 00:29:34.984952 kernel: Using GB pages for direct mapping
Jan 24 00:29:34.984958 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:29:34.984964 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 24 00:29:34.984970 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.984976 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.984982 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.984988 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 24 00:29:34.984994 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.985003 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.985009 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.985015 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 24 00:29:34.985025 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 24 00:29:34.985031 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 24 00:29:34.985052 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 24 00:29:34.985080 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 24 00:29:34.985087 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 24 00:29:34.985094 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 24 00:29:34.985100 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 24 00:29:34.985106 kernel: No NUMA configuration found
Jan 24 00:29:34.985113 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 24 00:29:34.985119 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Jan 24 00:29:34.985126 kernel: Zone ranges:
Jan 24 00:29:34.985141 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:29:34.985148 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:29:34.985154 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 24 00:29:34.985161 kernel: Movable zone start for each node
Jan 24 00:29:34.985167 kernel: Early memory node ranges
Jan 24 00:29:34.985173 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 24 00:29:34.985180 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 24 00:29:34.985186 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 24 00:29:34.985192 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 24 00:29:34.985199 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:29:34.985207 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 24 00:29:34.985214 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 24 00:29:34.985220 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 24 00:29:34.985227 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:29:34.985233 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:29:34.985240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 24 00:29:34.985246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:29:34.985253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:29:34.985259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:29:34.985268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:29:34.985274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:29:34.985280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:29:34.985287 kernel: TSC deadline timer available
Jan 24 00:29:34.985293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:29:34.985299 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:29:34.985306 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 24 00:29:34.985312 kernel: kvm-guest: setup PV sched yield
Jan 24 00:29:34.985318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 24 00:29:34.985327 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:29:34.985334 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:29:34.985340 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:29:34.985347 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:29:34.985353 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:29:34.985360 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:29:34.985366 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:29:34.985372 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:29:34.985380 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:29:34.985389 kernel: random: crng init done
Jan 24 00:29:34.985395 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:29:34.985402 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:29:34.985408 kernel: Fallback order for Node 0: 0
Jan 24 00:29:34.985414 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jan 24 00:29:34.985420 kernel: Policy zone: Normal
Jan 24 00:29:34.985442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:29:34.985449 kernel: software IO TLB: area num 2.
Jan 24 00:29:34.985479 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 227300K reserved, 0K cma-reserved)
Jan 24 00:29:34.985486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:29:34.986265 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:29:34.986277 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:29:34.986284 kernel: Dynamic Preempt: voluntary
Jan 24 00:29:34.986291 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:29:34.986298 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:29:34.986305 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:29:34.986311 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:29:34.986322 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:29:34.986328 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:29:34.986335 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:29:34.986341 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:29:34.986348 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:29:34.986354 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:29:34.986361 kernel: Console: colour VGA+ 80x25
Jan 24 00:29:34.986367 kernel: printk: console [tty0] enabled
Jan 24 00:29:34.986373 kernel: printk: console [ttyS0] enabled
Jan 24 00:29:34.986380 kernel: ACPI: Core revision 20230628
Jan 24 00:29:34.986389 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 24 00:29:34.986395 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:29:34.986402 kernel: x2apic enabled
Jan 24 00:29:34.986416 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:29:34.986426 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 24 00:29:34.986448 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 24 00:29:34.986454 kernel: kvm-guest: setup PV IPIs
Jan 24 00:29:34.986461 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 24 00:29:34.986468 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 24 00:29:34.986474 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Jan 24 00:29:34.986481 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 24 00:29:34.986491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 24 00:29:34.986498 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 24 00:29:34.986505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:29:34.986511 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:29:34.986518 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:29:34.986528 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 24 00:29:34.986535 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 24 00:29:34.986542 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 24 00:29:34.986549 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 24 00:29:34.986556 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 24 00:29:34.986563 kernel: active return thunk: srso_alias_return_thunk
Jan 24 00:29:34.986570 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 24 00:29:34.986577 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 24 00:29:34.986586 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:29:34.986593 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:29:34.986599 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:29:34.986606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:29:34.986613 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:29:34.986619 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:29:34.986626 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 24 00:29:34.986633 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 24 00:29:34.986640 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:29:34.986649 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:29:34.986655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:29:34.986662 kernel: landlock: Up and running.
Jan 24 00:29:34.986669 kernel: SELinux: Initializing.
Jan 24 00:29:34.986675 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:29:34.986682 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 24 00:29:34.986689 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 24 00:29:34.986696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:29:34.986703 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:29:34.986712 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:29:34.986718 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 24 00:29:34.986725 kernel: ... version: 0
Jan 24 00:29:34.986732 kernel: ... bit width: 48
Jan 24 00:29:34.986738 kernel: ... generic registers: 6
Jan 24 00:29:34.986745 kernel: ... value mask: 0000ffffffffffff
Jan 24 00:29:34.986752 kernel: ... max period: 00007fffffffffff
Jan 24 00:29:34.986758 kernel: ... fixed-purpose events: 0
Jan 24 00:29:34.986765 kernel: ... event mask: 000000000000003f
Jan 24 00:29:34.986774 kernel: signal: max sigframe size: 3376
Jan 24 00:29:34.986780 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:29:34.986787 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:29:34.986794 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:29:34.986800 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:29:34.986807 kernel: .... node #0, CPUs: #1
Jan 24 00:29:34.986814 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:29:34.986820 kernel: smpboot: Max logical packages: 1
Jan 24 00:29:34.986827 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Jan 24 00:29:34.986836 kernel: devtmpfs: initialized
Jan 24 00:29:34.986843 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:29:34.986849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:29:34.986856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:29:34.986863 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:29:34.986869 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:29:34.986876 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:29:34.986883 kernel: audit: type=2000 audit(1769214574.763:1): state=initialized audit_enabled=0 res=1
Jan 24 00:29:34.986890 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:29:34.986899 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:29:34.986906 kernel: cpuidle: using governor menu
Jan 24 00:29:34.986912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:29:34.986919 kernel: dca service started, version 1.12.1
Jan 24 00:29:34.986926 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 24 00:29:34.986933 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 24 00:29:34.986939 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:29:34.986946 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:29:34.986953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:29:34.986962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:29:34.986968 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:29:34.986975 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:29:34.986982 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:29:34.986988 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:29:34.986995 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:29:34.987002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:29:34.987008 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:29:34.987015 kernel: ACPI: Interpreter enabled
Jan 24 00:29:34.987024 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 24 00:29:34.987030 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:29:34.987037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:29:34.987044 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:29:34.987050 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 24 00:29:34.987057 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:29:34.987240 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:29:34.987407 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 24 00:29:34.988331 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 24 00:29:34.988345 kernel: PCI host bridge to bus 0000:00
Jan 24 00:29:34.988532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:29:34.988654 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:29:34.988769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:29:34.988883 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 24 00:29:34.988997 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 24 00:29:34.989118 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 24 00:29:34.989231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:29:34.989370 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 24 00:29:34.989542 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 24 00:29:34.989672 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 24 00:29:34.989795 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 24 00:29:34.989924 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 24 00:29:34.990048 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:29:34.990180 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Jan 24 00:29:34.990305 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Jan 24 00:29:34.993826 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 24 00:29:34.993980 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 24 00:29:34.994120 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 24 00:29:34.994255 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 24 00:29:34.994379 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 24 00:29:34.994603 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 24 00:29:34.994731 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 24 00:29:34.994862 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 24 00:29:34.994985 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 24 00:29:34.995114 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 24 00:29:34.995244 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Jan 24 00:29:34.995367 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Jan 24 00:29:34.995551 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 24 00:29:34.995680 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 24 00:29:34.995690 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:29:34.995698 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:29:34.995704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:29:34.995715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:29:34.995722 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 24 00:29:34.995729 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 24 00:29:34.995735 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 24 00:29:34.995742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 24 00:29:34.995749 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 24 00:29:34.995755 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 24 00:29:34.995762 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 24 00:29:34.995769 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 24 00:29:34.995778 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 24 00:29:34.995785 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 24 00:29:34.995792 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 24 00:29:34.995798 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 24 00:29:34.995805 kernel: iommu: Default domain type: Translated
Jan 24 00:29:34.995812 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:29:34.995819 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:29:34.995825 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:29:34.995838 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 24 00:29:34.995854 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 24 00:29:34.996210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 24 00:29:34.996337 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 24 00:29:34.996493 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:29:34.996504 kernel: vgaarb: loaded
Jan 24 00:29:34.996511 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 24 00:29:34.996518 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 24 00:29:34.996525 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:29:34.996536 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:29:34.996543 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:29:34.996550 kernel: pnp: PnP ACPI init
Jan 24 00:29:34.997637 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 24 00:29:34.997652 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:29:34.997660 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:29:34.997667 kernel: NET: Registered PF_INET protocol family
Jan 24 00:29:34.997674 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:29:34.997681 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 24 00:29:34.997693 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:29:34.997699 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:29:34.997706 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 24 00:29:34.997713 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 24 00:29:34.997720 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:29:34.997727 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 24 00:29:34.997734 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:29:34.997741 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:29:34.997864 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:29:34.997981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:29:34.998096 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:29:34.998209 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 24 00:29:34.998322 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 24 00:29:35.002261 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 24 00:29:35.002276 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:29:35.002283 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:29:35.002291 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 24 00:29:35.002302 kernel: Initialise system trusted keyrings
Jan 24 00:29:35.002309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 24 00:29:35.002316 kernel: Key type asymmetric registered
Jan 24 00:29:35.002323 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:29:35.002330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:29:35.002337 kernel: io scheduler mq-deadline registered
Jan 24 00:29:35.002344 kernel: io scheduler kyber registered
Jan 24 00:29:35.002350 kernel: io scheduler bfq registered
Jan 24 00:29:35.002357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:29:35.002367 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 24 00:29:35.002374 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 24 00:29:35.002381 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:29:35.002388 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:29:35.002395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:29:35.002401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:29:35.002408 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:29:35.002560 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 24 00:29:35.002576 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:29:35.002693 kernel: rtc_cmos 00:03: registered as rtc0
Jan 24 00:29:35.002809 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:29:34 UTC (1769214574)
Jan 24 00:29:35.002925 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 24 00:29:35.002934 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 24 00:29:35.002941 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:29:35.002947 kernel: Segment Routing with IPv6
Jan 24 00:29:35.002954 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:29:35.002961 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:29:35.002971 kernel: Key type dns_resolver registered
Jan 24 00:29:35.002978 kernel: IPI shorthand broadcast: enabled
Jan 24 00:29:35.002985 kernel: sched_clock: Marking stable (874003970, 311872750)->(1318328580, -132451860)
Jan 24 00:29:35.002992 kernel: registered taskstats version 1
Jan 24 00:29:35.002999 kernel: Loading compiled-in X.509 certificates
Jan 24 00:29:35.003005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:29:35.003012 kernel: Key type .fscrypt registered
Jan 24 00:29:35.003019 kernel: Key type fscrypt-provisioning registered
Jan 24 00:29:35.003026 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:29:35.003035 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:29:35.003042 kernel: ima: No architecture policies found
Jan 24 00:29:35.003048 kernel: clk: Disabling unused clocks
Jan 24 00:29:35.003055 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:29:35.003062 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:29:35.003069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:29:35.003075 kernel: Run /init as init process
Jan 24 00:29:35.003082 kernel: with arguments:
Jan 24 00:29:35.003091 kernel: /init
Jan 24 00:29:35.003098 kernel: with environment:
Jan 24 00:29:35.003105 kernel: HOME=/
Jan 24 00:29:35.003112 kernel: TERM=linux
Jan 24 00:29:35.003120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:29:35.003129 systemd[1]: Detected virtualization kvm.
Jan 24 00:29:35.003138 systemd[1]: Detected architecture x86-64.
Jan 24 00:29:35.003145 systemd[1]: Running in initrd.
Jan 24 00:29:35.003154 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:29:35.003161 systemd[1]: Hostname set to .
Jan 24 00:29:35.003169 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:29:35.003176 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:29:35.003183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:29:35.003205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:29:35.003218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:29:35.003226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:29:35.003233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:29:35.003241 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:29:35.003249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:29:35.003257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:29:35.003264 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:29:35.003274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:29:35.003282 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:29:35.003289 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:29:35.003297 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:29:35.003304 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:29:35.003312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:29:35.003319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:29:35.003326 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:29:35.003336 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:29:35.003344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:29:35.003351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:29:35.003359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:29:35.003366 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:29:35.003373 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:29:35.003381 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:29:35.003390 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:29:35.003404 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:29:35.003422 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:29:35.003478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:29:35.003508 systemd-journald[178]: Collecting audit messages is disabled.
Jan 24 00:29:35.003525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:29:35.003538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:29:35.003548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:29:35.003556 systemd-journald[178]: Journal started
Jan 24 00:29:35.003575 systemd-journald[178]: Runtime Journal (/run/log/journal/4d12ff61e77143dcb8a78aca2662a59b) is 8.0M, max 78.3M, 70.3M free.
Jan 24 00:29:34.982791 systemd-modules-load[179]: Inserted module 'overlay'
Jan 24 00:29:35.015459 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:29:35.018015 kernel: Bridge firewalling registered
Jan 24 00:29:35.017282 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 24 00:29:35.102109 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:29:35.103400 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:29:35.105220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:29:35.106220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:29:35.114580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:29:35.117527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:29:35.121032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:29:35.126056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:29:35.140155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:29:35.165371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:29:35.166361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:29:35.168147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:29:35.175556 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:29:35.179238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:29:35.184241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:29:35.199780 dracut-cmdline[208]: dracut-dracut-053
Jan 24 00:29:35.200671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:29:35.207758 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:29:35.221622 systemd-resolved[210]: Positive Trust Anchors:
Jan 24 00:29:35.222528 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:29:35.222558 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:29:35.226081 systemd-resolved[210]: Defaulting to hostname 'linux'.
Jan 24 00:29:35.229814 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:29:35.231252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:29:35.288466 kernel: SCSI subsystem initialized
Jan 24 00:29:35.299451 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:29:35.310460 kernel: iscsi: registered transport (tcp)
Jan 24 00:29:35.332225 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:29:35.332262 kernel: QLogic iSCSI HBA Driver
Jan 24 00:29:35.374852 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:29:35.382553 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:29:35.411548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:29:35.411585 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:29:35.413783 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:29:35.458464 kernel: raid6: avx2x4 gen() 30659 MB/s
Jan 24 00:29:35.476455 kernel: raid6: avx2x2 gen() 25456 MB/s
Jan 24 00:29:35.494624 kernel: raid6: avx2x1 gen() 19873 MB/s
Jan 24 00:29:35.494654 kernel: raid6: using algorithm avx2x4 gen() 30659 MB/s
Jan 24 00:29:35.514829 kernel: raid6: .... xor() 5698 MB/s, rmw enabled
Jan 24 00:29:35.514855 kernel: raid6: using avx2x2 recovery algorithm
Jan 24 00:29:35.537461 kernel: xor: automatically using best checksumming function avx
Jan 24 00:29:35.683465 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:29:35.696585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:29:35.702597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:29:35.724355 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 24 00:29:35.729058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:29:35.738576 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:29:35.751851 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jan 24 00:29:35.785728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:29:35.791563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:29:35.859912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:29:35.865586 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:29:35.879669 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:29:35.883446 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:29:35.885406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:29:35.886173 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:29:35.892656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:29:35.913655 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:29:35.945475 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:29:36.103600 kernel: scsi host0: Virtio SCSI HBA
Jan 24 00:29:36.109455 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 24 00:29:36.120644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:29:36.122033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:29:36.124995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:29:36.149906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:29:36.150097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:29:36.150959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:29:36.159869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:29:36.165479 kernel: libata version 3.00 loaded.
Jan 24 00:29:36.174618 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:29:36.174659 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:29:36.179309 kernel: ahci 0000:00:1f.2: version 3.0
Jan 24 00:29:36.179595 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 24 00:29:36.186912 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 24 00:29:36.187126 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 24 00:29:36.215468 kernel: scsi host1: ahci
Jan 24 00:29:36.218473 kernel: scsi host2: ahci
Jan 24 00:29:36.222449 kernel: scsi host3: ahci
Jan 24 00:29:36.225482 kernel: scsi host4: ahci
Jan 24 00:29:36.230710 kernel: scsi host5: ahci
Jan 24 00:29:36.232474 kernel: scsi host6: ahci
Jan 24 00:29:36.232659 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Jan 24 00:29:36.232679 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Jan 24 00:29:36.232690 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Jan 24 00:29:36.232700 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Jan 24 00:29:36.232710 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Jan 24 00:29:36.232720 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Jan 24 00:29:36.237515 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 24 00:29:36.238392 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 24 00:29:36.238595 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:29:36.238768 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 24 00:29:36.238946 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 00:29:36.242488 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:29:36.242530 kernel: GPT:9289727 != 167739391
Jan 24 00:29:36.242554 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:29:36.242566 kernel: GPT:9289727 != 167739391
Jan 24 00:29:36.242576 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:29:36.242588 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:29:36.243931 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:29:36.352465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:29:36.362652 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:29:36.402228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:29:36.549525 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.549666 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.549686 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.550470 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.555457 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.555493 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 24 00:29:36.602459 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (440)
Jan 24 00:29:36.606039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 24 00:29:36.613870 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462)
Jan 24 00:29:36.613740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 24 00:29:36.627380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 24 00:29:36.633186 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 24 00:29:36.635225 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 24 00:29:36.642595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:29:36.649007 disk-uuid[567]: Primary Header is updated.
Jan 24 00:29:36.649007 disk-uuid[567]: Secondary Entries is updated.
Jan 24 00:29:36.649007 disk-uuid[567]: Secondary Header is updated.
Jan 24 00:29:36.656465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:29:36.662462 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:29:37.665465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:29:37.666149 disk-uuid[568]: The operation has completed successfully.
Jan 24 00:29:37.718068 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:29:37.718199 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:29:37.727560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:29:37.731612 sh[582]: Success
Jan 24 00:29:37.746470 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 24 00:29:37.793759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:29:37.802535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:29:37.807499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:29:37.824512 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:29:37.824553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:29:37.827648 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:29:37.830863 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:29:37.834886 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:29:37.842449 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:29:37.845150 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:29:37.846669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:29:37.853588 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:29:37.857593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:29:37.872508 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:29:37.877928 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:29:37.877951 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:29:37.886361 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:29:37.886385 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:29:37.898351 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:29:37.902949 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:29:37.909633 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:29:37.916604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:29:37.993845 ignition[687]: Ignition 2.19.0
Jan 24 00:29:37.993860 ignition[687]: Stage: fetch-offline
Jan 24 00:29:37.993904 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:29:37.993916 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 24 00:29:37.994016 ignition[687]: parsed url from cmdline: ""
Jan 24 00:29:37.997465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:29:37.994021 ignition[687]: no config URL provided
Jan 24 00:29:37.994027 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:29:37.994038 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:29:37.994044 ignition[687]: failed to fetch config: resource requires networking
Jan 24 00:29:37.994226 ignition[687]: Ignition finished successfully
Jan 24 00:29:38.009172 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:29:38.016593 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:29:38.039099 systemd-networkd[769]: lo: Link UP
Jan 24 00:29:38.039111 systemd-networkd[769]: lo: Gained carrier
Jan 24 00:29:38.040775 systemd-networkd[769]: Enumeration completed
Jan 24 00:29:38.040875 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:29:38.041619 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:29:38.041624 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:29:38.041778 systemd[1]: Reached target network.target - Network.
Jan 24 00:29:38.043200 systemd-networkd[769]: eth0: Link UP
Jan 24 00:29:38.043204 systemd-networkd[769]: eth0: Gained carrier
Jan 24 00:29:38.043212 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:29:38.048552 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:29:38.062423 ignition[772]: Ignition 2.19.0
Jan 24 00:29:38.063533 ignition[772]: Stage: fetch
Jan 24 00:29:38.064581 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:29:38.065467 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 24 00:29:38.065565 ignition[772]: parsed url from cmdline: ""
Jan 24 00:29:38.065571 ignition[772]: no config URL provided
Jan 24 00:29:38.065577 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:29:38.065587 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:29:38.065606 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 24 00:29:38.065817 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 24 00:29:38.266051 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 24 00:29:38.266193 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 24 00:29:38.666583 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 24 00:29:38.666746 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 24 00:29:38.866495 systemd-networkd[769]: eth0: DHCPv4 address 172.234.200.204/24, gateway 172.234.200.1 acquired from 23.205.167.181
Jan 24 00:29:39.112748 systemd-networkd[769]: eth0: Gained IPv6LL
Jan 24 00:29:39.467571 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Jan 24 00:29:39.567507 ignition[772]: PUT result: OK
Jan 24 00:29:39.567586 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 24 00:29:39.681845 ignition[772]: GET result: OK
Jan 24 00:29:39.681949 ignition[772]: parsing config with SHA512: 334e57f08d2fc7e3404adfe572e8e790339266c01006e7a2d7d697c566a24e68e4d3e6a3188c7ca03e7ab5dc6a255f98ec451dada5beea519320b16e7afd16d0
Jan 24 00:29:39.685699 unknown[772]: fetched base config from "system"
Jan 24 00:29:39.685953 ignition[772]: fetch: fetch complete
Jan 24 00:29:39.685710 unknown[772]: fetched base config from "system"
Jan 24 00:29:39.685961 ignition[772]: fetch: fetch passed
Jan 24 00:29:39.685716 unknown[772]: fetched user config from "akamai"
Jan 24 00:29:39.686023 ignition[772]: Ignition finished successfully
Jan 24 00:29:39.689190 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:29:39.700624 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:29:39.713178 ignition[780]: Ignition 2.19.0
Jan 24 00:29:39.713194 ignition[780]: Stage: kargs
Jan 24 00:29:39.713340 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:29:39.716811 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:29:39.713352 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 24 00:29:39.714053 ignition[780]: kargs: kargs passed
Jan 24 00:29:39.714096 ignition[780]: Ignition finished successfully
Jan 24 00:29:39.727577 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:29:39.741121 ignition[786]: Ignition 2.19.0
Jan 24 00:29:39.741133 ignition[786]: Stage: disks
Jan 24 00:29:39.741286 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:29:39.748153 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:29:39.741297 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 24 00:29:39.766872 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:29:39.741966 ignition[786]: disks: disks passed
Jan 24 00:29:39.767841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:29:39.742007 ignition[786]: Ignition finished successfully
Jan 24 00:29:39.768628 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:29:39.770476 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:29:39.772332 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:29:39.779645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:29:39.796205 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:29:39.800125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:29:39.810706 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:29:39.898457 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:29:39.898765 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:29:39.900365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:29:39.906592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:29:39.910293 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:29:39.912401 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:29:39.912478 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:29:39.912502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:29:39.924404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:29:39.941108 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803)
Jan 24 00:29:39.941128 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:29:39.941140 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:29:39.941150 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:29:39.941160 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 24 00:29:39.941170 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:29:39.942464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:29:39.948833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:29:40.001511 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:29:40.007276 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:29:40.012555 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:29:40.017161 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:29:40.111261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:29:40.121547 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:29:40.124761 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:29:40.130291 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:29:40.136100 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:40.157674 ignition[917]: INFO : Ignition 2.19.0 Jan 24 00:29:40.157674 ignition[917]: INFO : Stage: mount Jan 24 00:29:40.162490 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:40.162490 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:40.162490 ignition[917]: INFO : mount: mount passed Jan 24 00:29:40.162490 ignition[917]: INFO : Ignition finished successfully Jan 24 00:29:40.161409 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:29:40.169538 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:29:40.172262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:29:40.904592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:29:40.921457 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Jan 24 00:29:40.921507 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:40.927489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:29:40.927522 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:29:40.936011 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:29:40.936043 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:29:40.938821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:29:40.965626 ignition[947]: INFO : Ignition 2.19.0 Jan 24 00:29:40.965626 ignition[947]: INFO : Stage: files Jan 24 00:29:40.967683 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:40.967683 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:40.967683 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:29:40.967683 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:29:40.967683 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:29:40.972850 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:29:40.973926 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:29:40.975070 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:29:40.974114 unknown[947]: wrote ssh authorized keys file for user: core Jan 24 00:29:40.977049 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:29:40.977049 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:29:41.277899 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:29:41.475749 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:29:42.015973 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:29:42.455772 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:42.455772 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:29:42.458719 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:29:42.482103 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:29:42.482103 ignition[947]: INFO : files: files passed Jan 24 00:29:42.482103 ignition[947]: INFO : Ignition finished successfully Jan 24 00:29:42.464188 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:29:42.490627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:29:42.493750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:29:42.496971 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:29:42.497096 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:29:42.508479 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.508479 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.511639 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.512026 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:29:42.513795 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:29:42.520552 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:29:42.555683 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:29:42.555816 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:29:42.557821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:29:42.559062 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:29:42.560763 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:29:42.569577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:29:42.581326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:29:42.586559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:29:42.595409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:29:42.596262 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:29:42.597934 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:29:42.599519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:29:42.599619 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:29:42.601406 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:29:42.602478 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:29:42.604042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
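The files stage above writes a handful of files, one symlink, and a unit preset, which implies a spec-3 Ignition config of roughly the following shape. The actual user config never appears in the log, so this is an illustrative guess: the update.conf contents and the prepare-helm.service body are placeholders, and only the paths and URLs are taken from the logged operations.

```python
import json

# Hypothetical reconstruction; field names follow the Ignition v3 spec.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            },
            {
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,REBOOT_STRATEGY=off%0A"},  # placeholder contents
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n# placeholder\n"}
        ]
    },
}

print(json.dumps(config, indent=2))
```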
Jan 24 00:29:42.605518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:29:42.606951 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:29:42.608610 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:29:42.610207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:29:42.611845 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:29:42.613414 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:29:42.615044 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:29:42.616570 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:29:42.616670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:29:42.618635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:29:42.619792 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:29:42.621456 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:29:42.622254 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:29:42.624404 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:29:42.624569 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:29:42.626798 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:29:42.626949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:29:42.628165 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:29:42.628332 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:29:42.636662 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:29:42.637680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:29:42.637881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:29:42.643580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:29:42.646490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:29:42.647568 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:29:42.651720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:29:42.651835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:29:42.659702 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:29:42.662564 ignition[999]: INFO : Ignition 2.19.0 Jan 24 00:29:42.662564 ignition[999]: INFO : Stage: umount Jan 24 00:29:42.662564 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:42.662564 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:42.660493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:29:42.673667 ignition[999]: INFO : umount: umount passed Jan 24 00:29:42.673667 ignition[999]: INFO : Ignition finished successfully Jan 24 00:29:42.667346 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:29:42.668462 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:29:42.671231 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:29:42.671317 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 24 00:29:42.672957 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:29:42.673032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:29:42.675712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:29:42.675763 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:29:42.678055 systemd[1]: Stopped target network.target - Network. Jan 24 00:29:42.679526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:29:42.679583 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:29:42.682526 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:29:42.706396 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:29:42.712553 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:29:42.713383 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:29:42.715141 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:29:42.716777 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:29:42.716852 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:29:42.718577 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:29:42.718647 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:29:42.720107 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:29:42.720191 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:29:42.721711 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:29:42.721784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:29:42.723720 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:29:42.725876 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:29:42.729028 systemd-networkd[769]: eth0: DHCPv6 lease lost Jan 24 00:29:42.729135 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:29:42.730319 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:29:42.730929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:29:42.732685 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:29:42.732838 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:29:42.738003 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:29:42.738084 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:29:42.739752 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:29:42.739822 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:29:42.746521 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:29:42.747384 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:29:42.747464 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:29:42.754498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:29:42.757350 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:29:42.757682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:29:42.766639 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 24 00:29:42.766729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:29:42.768035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:29:42.768086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:29:42.769669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:29:42.769718 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:29:42.772096 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:29:42.772478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:29:42.774119 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:29:42.774220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:29:42.776332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:29:42.776405 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:29:42.779077 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:29:42.779120 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:29:42.780735 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:29:42.780788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:29:42.782741 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:29:42.782790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:29:42.784415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:29:42.784514 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:29:42.791561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:29:42.792930 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:29:42.792987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:29:42.793930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:29:42.793980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:42.800354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:29:42.800481 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:29:42.802228 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:29:42.809580 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:29:42.817975 systemd[1]: Switching root. Jan 24 00:29:42.851454 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:29:42.851502 systemd-journald[178]: Journal stopped Jan 24 00:29:34.985180 kernel: node 0: [mem
0x0000000000100000-0x000000007ffdcfff] Jan 24 00:29:34.985186 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Jan 24 00:29:34.985192 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Jan 24 00:29:34.985199 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:29:34.985207 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 24 00:29:34.985214 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jan 24 00:29:34.985220 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 24 00:29:34.985227 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 24 00:29:34.985233 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:29:34.985240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 24 00:29:34.985246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 24 00:29:34.985253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:29:34.985259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 24 00:29:34.985268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 24 00:29:34.985274 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:29:34.985280 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 24 00:29:34.985287 kernel: TSC deadline timer available Jan 24 00:29:34.985293 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:29:34.985299 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 24 00:29:34.985306 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 24 00:29:34.985312 kernel: kvm-guest: setup PV sched yield Jan 24 00:29:34.985318 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 24 00:29:34.985327 kernel: Booting paravirtualized kernel on KVM Jan 24 00:29:34.985334 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:29:34.985340 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:29:34.985347 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:29:34.985353 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:29:34.985360 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:29:34.985366 kernel: kvm-guest: PV spinlocks enabled Jan 24 00:29:34.985372 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:29:34.985380 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:29:34.985389 kernel: random: crng init done Jan 24 00:29:34.985395 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:29:34.985402 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 24 00:29:34.985408 kernel: Fallback order for Node 0: 0 Jan 24 00:29:34.985414 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Jan 24 00:29:34.985420 kernel: Policy zone: Normal Jan 24 00:29:34.985442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:29:34.985449 kernel: software IO TLB: area num 2. 
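The replayed "Kernel command line" above carries rootflags=rw and mount.usrflags=ro twice, once prepended ahead of BOOT_IMAGE and once among the image's baked-in arguments; kernel parameter parsing is generally last-wins, so the repeats are harmless. A quick sketch of that parsing (quoted values with embedded spaces are not handled):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into parameters; repeated keys keep the last value."""
    params = {}
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        params[key] = val if sep else True  # bare tokens become boolean flags
    return params

# On a live system the current boot's line is in /proc/cmdline:
with open("/proc/cmdline") as f:
    print(parse_cmdline(f.read()))
```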
Jan 24 00:29:34.985479 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 227300K reserved, 0K cma-reserved) Jan 24 00:29:34.985486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:29:34.986265 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:29:34.986277 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:29:34.986284 kernel: Dynamic Preempt: voluntary Jan 24 00:29:34.986291 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:29:34.986298 kernel: rcu: RCU event tracing is enabled. Jan 24 00:29:34.986305 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:29:34.986311 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:29:34.986322 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:29:34.986328 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:29:34.986335 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 24 00:29:34.986341 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:29:34.986348 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 24 00:29:34.986354 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:29:34.986361 kernel: Console: colour VGA+ 80x25 Jan 24 00:29:34.986367 kernel: printk: console [tty0] enabled Jan 24 00:29:34.986373 kernel: printk: console [ttyS0] enabled Jan 24 00:29:34.986380 kernel: ACPI: Core revision 20230628 Jan 24 00:29:34.986389 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 24 00:29:34.986395 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:29:34.986402 kernel: x2apic enabled Jan 24 00:29:34.986416 kernel: APIC: Switched APIC routing to: physical x2apic Jan 24 00:29:34.986426 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 24 00:29:34.986448 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 24 00:29:34.986454 kernel: kvm-guest: setup PV IPIs Jan 24 00:29:34.986461 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 24 00:29:34.986468 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 24 00:29:34.986474 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000) Jan 24 00:29:34.986481 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 24 00:29:34.986491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 24 00:29:34.986498 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 24 00:29:34.986505 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:29:34.986511 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:29:34.986518 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:29:34.986528 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 24 00:29:34.986535 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 24 00:29:34.986542 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 24 00:29:34.986549 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 24 00:29:34.986556 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
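The "Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)" figure above squares with the 2000.000 MHz TSC detected earlier: with calibration skipped under KVM, the per-CPU value works out to twice the clock in MHz, and the SMP summary later reports both vCPUs combined (8000.00). A back-of-envelope check; the HZ=1000 tick rate is an assumption that makes the standard lpj relation line up:

```python
tsc_mhz = 2000.0   # "tsc: Detected 2000.000 MHz processor"
lpj = 2_000_000    # "(lpj=2000000)" from the line above
hz = 1000          # assumed tick rate

per_cpu = lpj * hz / 500_000  # standard BogoMIPS/lpj relation -> 4000.0
assert per_cpu == 2 * tsc_mhz
print(per_cpu, 2 * per_cpu)   # 4000.0 per CPU, 8000.0 for the 2-CPU SMP summary
```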
Jan 24 00:29:34.986563 kernel: active return thunk: srso_alias_return_thunk Jan 24 00:29:34.986570 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 24 00:29:34.986577 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 24 00:29:34.986586 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:29:34.986593 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:29:34.986599 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:29:34.986606 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:29:34.986613 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 24 00:29:34.986619 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:29:34.986626 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Jan 24 00:29:34.986633 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Jan 24 00:29:34.986640 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:29:34.986649 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:29:34.986655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:29:34.986662 kernel: landlock: Up and running. Jan 24 00:29:34.986669 kernel: SELinux: Initializing. Jan 24 00:29:34.986675 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:29:34.986682 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 24 00:29:34.986689 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 24 00:29:34.986696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:29:34.986703 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:29:34.986712 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:29:34.986718 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 24 00:29:34.986725 kernel: ... version: 0 Jan 24 00:29:34.986732 kernel: ... bit width: 48 Jan 24 00:29:34.986738 kernel: ... generic registers: 6 Jan 24 00:29:34.986745 kernel: ... value mask: 0000ffffffffffff Jan 24 00:29:34.986752 kernel: ... max period: 00007fffffffffff Jan 24 00:29:34.986758 kernel: ... fixed-purpose events: 0 Jan 24 00:29:34.986765 kernel: ... event mask: 000000000000003f Jan 24 00:29:34.986774 kernel: signal: max sigframe size: 3376 Jan 24 00:29:34.986780 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:29:34.986787 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:29:34.986794 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:29:34.986800 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:29:34.986807 kernel: .... 
node #0, CPUs: #1 Jan 24 00:29:34.986814 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:29:34.986820 kernel: smpboot: Max logical packages: 1 Jan 24 00:29:34.986827 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Jan 24 00:29:34.986836 kernel: devtmpfs: initialized Jan 24 00:29:34.986843 kernel: x86/mm: Memory block size: 128MB Jan 24 00:29:34.986849 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:29:34.986856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:29:34.986863 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:29:34.986869 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:29:34.986876 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:29:34.986883 kernel: audit: type=2000 audit(1769214574.763:1): state=initialized audit_enabled=0 res=1 Jan 24 00:29:34.986890 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:29:34.986899 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:29:34.986906 kernel: cpuidle: using governor menu Jan 24 00:29:34.986912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:29:34.986919 kernel: dca service started, version 1.12.1 Jan 24 00:29:34.986926 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 24 00:29:34.986933 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 24 00:29:34.986939 kernel: PCI: Using configuration type 1 for base access Jan 24 00:29:34.986946 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:29:34.986953 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:29:34.986962 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:29:34.986968 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:29:34.986975 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:29:34.986982 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:29:34.986988 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:29:34.986995 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:29:34.987002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:29:34.987008 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:29:34.987015 kernel: ACPI: Interpreter enabled Jan 24 00:29:34.987024 kernel: ACPI: PM: (supports S0 S3 S5) Jan 24 00:29:34.987030 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:29:34.987037 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:29:34.987044 kernel: PCI: Using E820 reservations for host bridge windows Jan 24 00:29:34.987050 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 24 00:29:34.987057 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 24 00:29:34.987240 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 24 00:29:34.987407 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 24 00:29:34.988331 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 24 00:29:34.988345 kernel: PCI host bridge to bus 0000:00 Jan 24 00:29:34.988532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 24 00:29:34.988654 kernel: pci_bus 0000:00: root bus resource [io 
0x0d00-0xffff window] Jan 24 00:29:34.988769 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 24 00:29:34.988883 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 24 00:29:34.988997 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 24 00:29:34.989118 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Jan 24 00:29:34.989231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 24 00:29:34.989370 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 24 00:29:34.989542 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 24 00:29:34.989672 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 24 00:29:34.989795 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 24 00:29:34.989924 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 24 00:29:34.990048 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 24 00:29:34.990180 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Jan 24 00:29:34.990305 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Jan 24 00:29:34.993826 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 24 00:29:34.993980 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 24 00:29:34.994120 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 24 00:29:34.994255 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 24 00:29:34.994379 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 24 00:29:34.994603 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 24 00:29:34.994731 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 24 00:29:34.994862 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 24 00:29:34.994985 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 24 00:29:34.995114 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 24 00:29:34.995244 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Jan 24 00:29:34.995367 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Jan 24 00:29:34.995551 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 24 00:29:34.995680 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 24 00:29:34.995690 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 24 00:29:34.995698 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 24 00:29:34.995704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 24 00:29:34.995715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 24 00:29:34.995722 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 24 00:29:34.995729 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 24 00:29:34.995735 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 24 00:29:34.995742 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 24 00:29:34.995749 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 24 00:29:34.995755 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 24 00:29:34.995762 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 24 00:29:34.995769 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 24 00:29:34.995778 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 24 
00:29:34.995785 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 24 00:29:34.995792 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 24 00:29:34.995798 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 24 00:29:34.995805 kernel: iommu: Default domain type: Translated Jan 24 00:29:34.995812 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:29:34.995819 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:29:34.995825 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 24 00:29:34.995838 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Jan 24 00:29:34.995854 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Jan 24 00:29:34.996210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 24 00:29:34.996337 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 24 00:29:34.996493 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 24 00:29:34.996504 kernel: vgaarb: loaded Jan 24 00:29:34.996511 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 24 00:29:34.996518 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 24 00:29:34.996525 kernel: clocksource: Switched to clocksource kvm-clock Jan 24 00:29:34.996536 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:29:34.996543 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:29:34.996550 kernel: pnp: PnP ACPI init Jan 24 00:29:34.997637 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 24 00:29:34.997652 kernel: pnp: PnP ACPI: found 5 devices Jan 24 00:29:34.997660 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:29:34.997667 kernel: NET: Registered PF_INET protocol family Jan 24 00:29:34.997674 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:29:34.997681 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 24 00:29:34.997693 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:29:34.997699 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 24 00:29:34.997706 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 24 00:29:34.997713 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 24 00:29:34.997720 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:29:34.997727 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 24 00:29:34.997734 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:29:34.997741 kernel: NET: Registered PF_XDP protocol family Jan 24 00:29:34.997864 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 24 00:29:34.997981 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 24 00:29:34.998096 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 24 00:29:34.998209 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 24 00:29:34.998322 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 24 00:29:35.002261 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Jan 24 00:29:35.002276 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:29:35.002283 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:29:35.002291 kernel: software IO TLB: mapped [mem 
0x000000007bfdd000-0x000000007ffdd000] (64MB) Jan 24 00:29:35.002302 kernel: Initialise system trusted keyrings Jan 24 00:29:35.002309 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 24 00:29:35.002316 kernel: Key type asymmetric registered Jan 24 00:29:35.002323 kernel: Asymmetric key parser 'x509' registered Jan 24 00:29:35.002330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:29:35.002337 kernel: io scheduler mq-deadline registered Jan 24 00:29:35.002344 kernel: io scheduler kyber registered Jan 24 00:29:35.002350 kernel: io scheduler bfq registered Jan 24 00:29:35.002357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:29:35.002367 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 24 00:29:35.002374 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 24 00:29:35.002381 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:29:35.002388 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:29:35.002395 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 24 00:29:35.002401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 24 00:29:35.002408 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 24 00:29:35.002560 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 24 00:29:35.002576 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 24 00:29:35.002693 kernel: rtc_cmos 00:03: registered as rtc0 Jan 24 00:29:35.002809 kernel: rtc_cmos 00:03: setting system clock to 2026-01-24T00:29:34 UTC (1769214574) Jan 24 00:29:35.002925 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 24 00:29:35.002934 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 24 00:29:35.002941 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:29:35.002947 kernel: Segment Routing with IPv6 Jan 24 00:29:35.002954 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:29:35.002961 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:29:35.002971 kernel: Key type dns_resolver registered Jan 24 00:29:35.002978 kernel: IPI shorthand broadcast: enabled Jan 24 00:29:35.002985 kernel: sched_clock: Marking stable (874003970, 311872750)->(1318328580, -132451860) Jan 24 00:29:35.002992 kernel: registered taskstats version 1 Jan 24 00:29:35.002999 kernel: Loading compiled-in X.509 certificates Jan 24 00:29:35.003005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:29:35.003012 kernel: Key type .fscrypt registered Jan 24 00:29:35.003019 kernel: Key type fscrypt-provisioning registered Jan 24 00:29:35.003026 kernel: ima: No TPM chip found, activating TPM-bypass! 
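The rtc_cmos line above pairs a human-readable UTC timestamp with its Unix epoch; the two agree exactly, which a one-liner confirms:

```python
from datetime import datetime, timezone

# "setting system clock to 2026-01-24T00:29:34 UTC (1769214574)"
print(datetime.fromtimestamp(1769214574, tz=timezone.utc).isoformat())
# -> 2026-01-24T00:29:34+00:00
```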
Jan 24 00:29:35.003035 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:29:35.003042 kernel: ima: No architecture policies found Jan 24 00:29:35.003048 kernel: clk: Disabling unused clocks Jan 24 00:29:35.003055 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:29:35.003062 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:29:35.003069 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:29:35.003075 kernel: Run /init as init process Jan 24 00:29:35.003082 kernel: with arguments: Jan 24 00:29:35.003091 kernel: /init Jan 24 00:29:35.003098 kernel: with environment: Jan 24 00:29:35.003105 kernel: HOME=/ Jan 24 00:29:35.003112 kernel: TERM=linux Jan 24 00:29:35.003120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:29:35.003129 systemd[1]: Detected virtualization kvm. Jan 24 00:29:35.003138 systemd[1]: Detected architecture x86-64. Jan 24 00:29:35.003145 systemd[1]: Running in initrd. Jan 24 00:29:35.003154 systemd[1]: No hostname configured, using default hostname. Jan 24 00:29:35.003161 systemd[1]: Hostname set to <localhost>. Jan 24 00:29:35.003169 systemd[1]: Initializing machine ID from random generator. Jan 24 00:29:35.003176 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:29:35.003183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:29:35.003205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:29:35.003218 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:29:35.003226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:29:35.003233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:29:35.003241 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:29:35.003249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:29:35.003257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:29:35.003264 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:29:35.003274 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:29:35.003282 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:29:35.003289 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:29:35.003297 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:29:35.003304 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:29:35.003312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:29:35.003319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:29:35.003326 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:29:35.003336 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
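The systemd banner above encodes build-time options as +/- flags. When auditing what a given image was compiled with, splitting the string is enough; the feature list below is copied verbatim from the logged line:

```python
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled = sorted(f[1:] for f in features.split() if f.startswith("+"))
disabled = sorted(f[1:] for f in features.split() if f.startswith("-"))
print("compiled in: ", ", ".join(enabled))
print("compiled out:", ", ".join(disabled))
```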
Jan 24 00:29:35.003344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:29:35.003351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:29:35.003359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:29:35.003366 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:29:35.003373 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:29:35.003381 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:29:35.003390 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:29:35.003404 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:29:35.003422 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:29:35.003478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:29:35.003508 systemd-journald[178]: Collecting audit messages is disabled. Jan 24 00:29:35.003525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:35.003538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:29:35.003548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:29:35.003556 systemd-journald[178]: Journal started Jan 24 00:29:35.003575 systemd-journald[178]: Runtime Journal (/run/log/journal/4d12ff61e77143dcb8a78aca2662a59b) is 8.0M, max 78.3M, 70.3M free. Jan 24 00:29:34.982791 systemd-modules-load[179]: Inserted module 'overlay' Jan 24 00:29:35.015459 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:29:35.018015 kernel: Bridge firewalling registered Jan 24 00:29:35.017282 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 24 00:29:35.102109 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:29:35.103400 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:29:35.105220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:29:35.106220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:35.114580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:29:35.117527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:29:35.121032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:29:35.126056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:29:35.140155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:29:35.165371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:29:35.166361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:29:35.168147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:29:35.175556 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:29:35.179238 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:29:35.184241 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 24 00:29:35.199780 dracut-cmdline[208]: dracut-dracut-053 Jan 24 00:29:35.200671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:29:35.207758 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:29:35.221622 systemd-resolved[210]: Positive Trust Anchors: Jan 24 00:29:35.222528 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:29:35.222558 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:29:35.226081 systemd-resolved[210]: Defaulting to hostname 'linux'. Jan 24 00:29:35.229814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:29:35.231252 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:29:35.288466 kernel: SCSI subsystem initialized Jan 24 00:29:35.299451 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:29:35.310460 kernel: iscsi: registered transport (tcp) Jan 24 00:29:35.332225 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:29:35.332262 kernel: QLogic iSCSI HBA Driver Jan 24 00:29:35.374852 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:29:35.382553 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:29:35.411548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:29:35.411585 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:29:35.413783 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:29:35.458464 kernel: raid6: avx2x4 gen() 30659 MB/s Jan 24 00:29:35.476455 kernel: raid6: avx2x2 gen() 25456 MB/s Jan 24 00:29:35.494624 kernel: raid6: avx2x1 gen() 19873 MB/s Jan 24 00:29:35.494654 kernel: raid6: using algorithm avx2x4 gen() 30659 MB/s Jan 24 00:29:35.514829 kernel: raid6: .... xor() 5698 MB/s, rmw enabled Jan 24 00:29:35.514855 kernel: raid6: using avx2x2 recovery algorithm Jan 24 00:29:35.537461 kernel: xor: automatically using best checksumming function avx Jan 24 00:29:35.683465 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:29:35.696585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:29:35.702597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:29:35.724355 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 24 00:29:35.729058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
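The raid6 benchmark above measures each available generator and keeps the fastest; the selection itself is just an argmax over the measured rates:

```python
# Throughputs (MB/s) as logged above; the kernel picks the maximum.
results = {"avx2x4": 30659, "avx2x2": 25456, "avx2x1": 19873}
best = max(results, key=results.get)
print(best, results[best])  # -> avx2x4 30659, matching "using algorithm avx2x4"
```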
Jan 24 00:29:35.738576 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:29:35.751851 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 24 00:29:35.785728 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:29:35.791563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:29:35.859912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:29:35.865586 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:29:35.879669 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:29:35.883446 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:29:35.885406 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:29:35.886173 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:29:35.892656 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:29:35.913655 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:29:35.945475 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:29:36.103600 kernel: scsi host0: Virtio SCSI HBA Jan 24 00:29:36.109455 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 24 00:29:36.120644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:29:36.122033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:29:36.124995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:29:36.149906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:29:36.150097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:36.150959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:36.159869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:36.165479 kernel: libata version 3.00 loaded. Jan 24 00:29:36.174618 kernel: AVX2 version of gcm_enc/dec engaged. 
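The "Coldplug All udev Devices" step replays add events for devices that appeared before udevd started, so rules run against them too. The same replay can be done by hand with udevadm:

    udevadm trigger --action=add   # re-emit "add" uevents for existing devices
    udevadm settle                 # block until the udev event queue drains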
Jan 24 00:29:36.174659 kernel: AES CTR mode by8 optimization enabled Jan 24 00:29:36.179309 kernel: ahci 0000:00:1f.2: version 3.0 Jan 24 00:29:36.179595 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 24 00:29:36.186912 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 24 00:29:36.187126 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 24 00:29:36.215468 kernel: scsi host1: ahci Jan 24 00:29:36.218473 kernel: scsi host2: ahci Jan 24 00:29:36.222449 kernel: scsi host3: ahci Jan 24 00:29:36.225482 kernel: scsi host4: ahci Jan 24 00:29:36.230710 kernel: scsi host5: ahci Jan 24 00:29:36.232474 kernel: scsi host6: ahci Jan 24 00:29:36.232659 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Jan 24 00:29:36.232679 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Jan 24 00:29:36.232690 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Jan 24 00:29:36.232700 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Jan 24 00:29:36.232710 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Jan 24 00:29:36.232720 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Jan 24 00:29:36.237515 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 24 00:29:36.238392 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Jan 24 00:29:36.238595 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:29:36.238768 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 24 00:29:36.238946 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 24 00:29:36.242488 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 24 00:29:36.242530 kernel: GPT:9289727 != 167739391 Jan 24 00:29:36.242554 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 24 00:29:36.242566 kernel: GPT:9289727 != 167739391 Jan 24 00:29:36.242576 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 24 00:29:36.242588 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:29:36.243931 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:29:36.352465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:36.362652 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:29:36.402228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:29:36.549525 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.549666 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.549686 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.550470 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.555457 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.555493 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 24 00:29:36.602459 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (440) Jan 24 00:29:36.606039 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 24 00:29:36.613870 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462) Jan 24 00:29:36.613740 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
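The GPT complaints above (9289727 != 167739391) are the usual sign of a small disk image written onto a larger disk: the backup GPT header still sits where the image ended instead of in the last LBA. One way to repair it, assuming gdisk's sgdisk is available and /dev/sda is the affected disk:

    sgdisk --print /dev/sda    # confirm the warning and the partition layout
    sgdisk -e /dev/sda         # move the backup header/table to the disk's end
    partprobe /dev/sda         # have the kernel re-read the partition table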
Jan 24 00:29:36.627380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:29:36.633186 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 24 00:29:36.635225 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 24 00:29:36.642595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:29:36.649007 disk-uuid[567]: Primary Header is updated. Jan 24 00:29:36.649007 disk-uuid[567]: Secondary Entries is updated. Jan 24 00:29:36.649007 disk-uuid[567]: Secondary Header is updated. Jan 24 00:29:36.656465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:29:36.662462 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:29:37.665465 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:29:37.666149 disk-uuid[568]: The operation has completed successfully. Jan 24 00:29:37.718068 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:29:37.718199 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:29:37.727560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:29:37.731612 sh[582]: Success Jan 24 00:29:37.746470 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 24 00:29:37.793759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:29:37.802535 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:29:37.807499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:29:37.824512 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:29:37.824553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:29:37.827648 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:29:37.830863 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:29:37.834886 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:29:37.842449 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 24 00:29:37.845150 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:29:37.846669 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:29:37.853588 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:29:37.857593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:29:37.872508 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:37.877928 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:29:37.877951 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:29:37.886361 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:29:37.886385 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:29:37.898351 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:29:37.902949 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:37.909633 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
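verity-setup.service assembles /dev/mapper/usr from the USR-A partition and the verity.usrhash root hash passed on the kernel command line. A rough hand-run equivalent with veritysetup is sketched below; Flatcar keeps the Merkle tree inside the same partition as the data, so the hash offset shown is illustrative only:

    USRDEV=/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132
    veritysetup open "$USRDEV" usr "$USRDEV" \
        f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 \
        --hash-offset=1065353216
    veritysetup status usr     # verify the dm-verity mapping is active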
Jan 24 00:29:37.916604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:29:37.993845 ignition[687]: Ignition 2.19.0 Jan 24 00:29:37.993860 ignition[687]: Stage: fetch-offline Jan 24 00:29:37.993904 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:37.993916 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:37.994016 ignition[687]: parsed url from cmdline: "" Jan 24 00:29:37.997465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:29:37.994021 ignition[687]: no config URL provided Jan 24 00:29:37.994027 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:29:37.994038 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:29:37.994044 ignition[687]: failed to fetch config: resource requires networking Jan 24 00:29:37.994226 ignition[687]: Ignition finished successfully Jan 24 00:29:38.009172 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:29:38.016593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:29:38.039099 systemd-networkd[769]: lo: Link UP Jan 24 00:29:38.039111 systemd-networkd[769]: lo: Gained carrier Jan 24 00:29:38.040775 systemd-networkd[769]: Enumeration completed Jan 24 00:29:38.040875 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:29:38.041619 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:38.041624 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:29:38.041778 systemd[1]: Reached target network.target - Network. Jan 24 00:29:38.043200 systemd-networkd[769]: eth0: Link UP Jan 24 00:29:38.043204 systemd-networkd[769]: eth0: Gained carrier Jan 24 00:29:38.043212 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:38.048552 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
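fetch-offline fails here only because this platform's config comes from the network; had /usr/lib/ignition/user.ign existed, Ignition would have parsed it instead. A minimal Ignition spec-3.x config of the kind it looks for (the version and key material below are placeholders, not taken from this boot):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core",
            "sshAuthorizedKeys": [ "ssh-ed25519 AAAA...placeholder core@host" ] }
        ]
      }
    }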
Jan 24 00:29:38.062423 ignition[772]: Ignition 2.19.0 Jan 24 00:29:38.063533 ignition[772]: Stage: fetch Jan 24 00:29:38.064581 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:38.065467 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:38.065565 ignition[772]: parsed url from cmdline: "" Jan 24 00:29:38.065571 ignition[772]: no config URL provided Jan 24 00:29:38.065577 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:29:38.065587 ignition[772]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:29:38.065606 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1 Jan 24 00:29:38.065817 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:29:38.266051 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2 Jan 24 00:29:38.266193 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:29:38.666583 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3 Jan 24 00:29:38.666746 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 24 00:29:38.866495 systemd-networkd[769]: eth0: DHCPv4 address 172.234.200.204/24, gateway 172.234.200.1 acquired from 23.205.167.181 Jan 24 00:29:39.112748 systemd-networkd[769]: eth0: Gained IPv6LL Jan 24 00:29:39.467571 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4 Jan 24 00:29:39.567507 ignition[772]: PUT result: OK Jan 24 00:29:39.567586 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1 Jan 24 00:29:39.681845 ignition[772]: GET result: OK Jan 24 00:29:39.681949 ignition[772]: parsing config with SHA512: 334e57f08d2fc7e3404adfe572e8e790339266c01006e7a2d7d697c566a24e68e4d3e6a3188c7ca03e7ab5dc6a255f98ec451dada5beea519320b16e7afd16d0 Jan 24 00:29:39.685699 unknown[772]: fetched base config from "system" Jan 24 00:29:39.685953 ignition[772]: fetch: fetch complete Jan 24 00:29:39.685710 unknown[772]: fetched base config from "system" Jan 24 00:29:39.685961 ignition[772]: fetch: fetch passed Jan 24 00:29:39.685716 unknown[772]: fetched user config from "akamai" Jan 24 00:29:39.686023 ignition[772]: Ignition finished successfully Jan 24 00:29:39.689190 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:29:39.700624 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:29:39.713178 ignition[780]: Ignition 2.19.0 Jan 24 00:29:39.713194 ignition[780]: Stage: kargs Jan 24 00:29:39.713340 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:39.716811 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:29:39.713352 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:39.714053 ignition[780]: kargs: kargs passed Jan 24 00:29:39.714096 ignition[780]: Ignition finished successfully Jan 24 00:29:39.727577 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:29:39.741121 ignition[786]: Ignition 2.19.0 Jan 24 00:29:39.741133 ignition[786]: Stage: disks Jan 24 00:29:39.741286 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:39.748153 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
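The PUT/GET pair above is the Akamai/Linode token-gated metadata flow: the first PUTs fail with "network is unreachable" until the DHCP lease lands at 00:29:38.866, after which attempt #4 gets a token and the user-data GET succeeds. Reproduced by hand (header names per Linode's metadata service documentation; verify them for your platform):

    TOKEN=$(curl -s -X PUT -H "Metadata-Token-Expiry-Seconds: 3600" \
        http://169.254.169.254/v1/token)
    curl -s -H "Metadata-Token: $TOKEN" \
        http://169.254.169.254/v1/user-data | base64 -d   # user-data is base64-encoded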
Jan 24 00:29:39.741297 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:39.766872 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:29:39.741966 ignition[786]: disks: disks passed Jan 24 00:29:39.767841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:29:39.742007 ignition[786]: Ignition finished successfully Jan 24 00:29:39.768628 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:29:39.770476 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:29:39.772332 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:29:39.779645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:29:39.796205 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 24 00:29:39.800125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:29:39.810706 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:29:39.898457 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:29:39.898765 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:29:39.900365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:29:39.906592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:29:39.910293 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:29:39.912401 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 24 00:29:39.912478 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:29:39.912502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:29:39.924404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:29:39.941108 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803) Jan 24 00:29:39.941128 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:39.941140 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:29:39.941150 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:29:39.941160 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:29:39.941170 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:29:39.942464 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:29:39.948833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:29:40.001511 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:29:40.007276 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:29:40.012555 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:29:40.017161 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:29:40.111261 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:29:40.121547 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:29:40.124761 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
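systemd-fsck and sysroot.mount above amount to checking and mounting the ext4 root, then hanging the btrfs OEM partition underneath it. Roughly, by hand:

    fsck.ext4 -p /dev/disk/by-label/ROOT     # "-p": repair safe problems, no prompts
    mount -o rw /dev/disk/by-label/ROOT /sysroot
    mount /dev/sda6 /sysroot/oem             # OEM partition (btrfs, label "OEM")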
Jan 24 00:29:40.130291 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:29:40.136100 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:40.157674 ignition[917]: INFO : Ignition 2.19.0 Jan 24 00:29:40.157674 ignition[917]: INFO : Stage: mount Jan 24 00:29:40.162490 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:40.162490 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:40.162490 ignition[917]: INFO : mount: mount passed Jan 24 00:29:40.162490 ignition[917]: INFO : Ignition finished successfully Jan 24 00:29:40.161409 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:29:40.169538 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:29:40.172262 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:29:40.904592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:29:40.921457 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Jan 24 00:29:40.921507 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:29:40.927489 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:29:40.927522 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:29:40.936011 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 24 00:29:40.936043 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:29:40.938821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:29:40.965626 ignition[947]: INFO : Ignition 2.19.0 Jan 24 00:29:40.965626 ignition[947]: INFO : Stage: files Jan 24 00:29:40.967683 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:40.967683 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:40.967683 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:29:40.967683 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:29:40.967683 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:29:40.972850 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:29:40.973926 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:29:40.975070 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:29:40.974114 unknown[947]: wrote ssh authorized keys file for user: core Jan 24 00:29:40.977049 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:29:40.977049 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:29:41.277899 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:29:41.475749 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:29:41.477136 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:41.485442 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:29:42.015973 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:29:42.455772 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:29:42.455772 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:29:42.458719 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:29:42.482103 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:29:42.482103 ignition[947]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:29:42.482103 ignition[947]: INFO : files: files passed Jan 24 00:29:42.482103 ignition[947]: INFO : Ignition finished successfully Jan 24 00:29:42.464188 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:29:42.490627 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:29:42.493750 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:29:42.496971 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:29:42.497096 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:29:42.508479 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.508479 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.511639 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:29:42.512026 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:29:42.513795 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:29:42.520552 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:29:42.555683 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:29:42.555816 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:29:42.557821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:29:42.559062 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:29:42.560763 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:29:42.569577 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:29:42.581326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:29:42.586559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:29:42.595409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:29:42.596262 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:29:42.597934 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:29:42.599519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:29:42.599619 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:29:42.601406 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:29:42.602478 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:29:42.604042 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
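Two of the files-stage ops above are worth unpacking: op(c) installs a complete unit, while op(e) installs a drop-in that only overrides parts of coreos-metadata.service. The drop-in mechanism looks like this; the path comes from the log, but the contents shown are illustrative, since the real drop-in's body is not logged:

    # /etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf
    [Service]
    # An empty assignment clears the packaged ExecStart before replacing it.
    ExecStart=
    # Illustrative command line only.
    ExecStart=/usr/bin/coreos-metadata --provider=akamai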
Jan 24 00:29:42.605518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:29:42.606951 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:29:42.608610 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:29:42.610207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:29:42.611845 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:29:42.613414 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:29:42.615044 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:29:42.616570 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:29:42.616670 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:29:42.618635 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:29:42.619792 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:29:42.621456 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:29:42.622254 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:29:42.624404 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:29:42.624569 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:29:42.626798 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:29:42.626949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:29:42.628165 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:29:42.628332 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:29:42.636662 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:29:42.637680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:29:42.637881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:29:42.643580 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:29:42.646490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:29:42.647568 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:29:42.651720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:29:42.651835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:29:42.659702 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:29:42.662564 ignition[999]: INFO : Ignition 2.19.0 Jan 24 00:29:42.662564 ignition[999]: INFO : Stage: umount Jan 24 00:29:42.662564 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:29:42.662564 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 24 00:29:42.660493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:29:42.673667 ignition[999]: INFO : umount: umount passed Jan 24 00:29:42.673667 ignition[999]: INFO : Ignition finished successfully Jan 24 00:29:42.667346 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:29:42.668462 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:29:42.671231 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:29:42.671317 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 24 00:29:42.672957 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:29:42.673032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:29:42.675712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:29:42.675763 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:29:42.678055 systemd[1]: Stopped target network.target - Network. Jan 24 00:29:42.679526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:29:42.679583 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:29:42.682526 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:29:42.706396 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:29:42.712553 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:29:42.713383 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:29:42.715141 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:29:42.716777 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:29:42.716852 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:29:42.718577 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:29:42.718647 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:29:42.720107 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:29:42.720191 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:29:42.721711 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:29:42.721784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:29:42.723720 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:29:42.725876 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:29:42.729028 systemd-networkd[769]: eth0: DHCPv6 lease lost Jan 24 00:29:42.729135 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:29:42.730319 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:29:42.730929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:29:42.732685 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:29:42.732838 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:29:42.738003 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:29:42.738084 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:29:42.739752 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:29:42.739822 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:29:42.746521 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:29:42.747384 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:29:42.747464 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:29:42.754498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:29:42.757350 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:29:42.757682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:29:42.766639 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 24 00:29:42.766729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:29:42.768035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:29:42.768086 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:29:42.769669 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:29:42.769718 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:29:42.772096 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:29:42.772478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:29:42.774119 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:29:42.774220 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:29:42.776332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:29:42.776405 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:29:42.779077 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:29:42.779120 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:29:42.780735 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:29:42.780788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:29:42.782741 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:29:42.782790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:29:42.784415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:29:42.784514 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:29:42.791561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:29:42.792930 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:29:42.792987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:29:42.793930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:29:42.793980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:42.800354 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:29:42.800481 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:29:42.802228 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:29:42.809580 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:29:42.817975 systemd[1]: Switching root. Jan 24 00:29:42.851454 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
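"Switching root" is the initrd handing PID 1 over to the real root filesystem; everything above it is the initrd tearing its own services, sockets and mounts down so nothing keeps /sysroot busy. The equivalent administrative command, for reference:

    systemctl switch-root /sysroot   # pivot into /sysroot; systemd re-executes there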
Jan 24 00:29:42.851502 systemd-journald[178]: Journal stopped Jan 24 00:29:43.953151 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:29:43.953175 kernel: SELinux: policy capability open_perms=1 Jan 24 00:29:43.953186 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:29:43.953195 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:29:43.953204 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:29:43.953216 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:29:43.953226 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:29:43.953236 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:29:43.953245 kernel: audit: type=1403 audit(1769214582.990:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:29:43.953256 systemd[1]: Successfully loaded SELinux policy in 52.177ms. Jan 24 00:29:43.953267 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.450ms. Jan 24 00:29:43.953280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:29:43.953290 systemd[1]: Detected virtualization kvm. Jan 24 00:29:43.953300 systemd[1]: Detected architecture x86-64. Jan 24 00:29:43.953310 systemd[1]: Detected first boot. Jan 24 00:29:43.953323 systemd[1]: Initializing machine ID from random generator. Jan 24 00:29:43.953333 zram_generator::config[1043]: No configuration found. Jan 24 00:29:43.953344 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:29:43.953354 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:29:43.953364 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:29:43.953373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:29:43.953384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:29:43.953394 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:29:43.953406 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:29:43.953417 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:29:43.953442 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:29:43.953453 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:29:43.953464 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:29:43.953474 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:29:43.953483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:29:43.953497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:29:43.953507 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:29:43.953517 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:29:43.953528 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 24 00:29:43.953539 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:29:43.953549 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:29:43.953558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:29:43.953568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:29:43.953578 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:29:43.953591 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:29:43.953604 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:29:43.953614 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:29:43.953624 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:29:43.953634 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:29:43.953644 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:29:43.953655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:29:43.953667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:29:43.953678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:29:43.953688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:29:43.953698 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:29:43.953708 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:29:43.953721 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:29:43.953731 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:29:43.953741 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:29:43.953753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:43.953763 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:29:43.953773 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:29:43.953783 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:29:43.953794 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:29:43.953807 systemd[1]: Reached target machines.target - Containers. Jan 24 00:29:43.953817 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:29:43.953827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:29:43.953837 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:29:43.953848 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:29:43.953858 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:29:43.953868 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:29:43.953878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:29:43.953891 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 24 00:29:43.953901 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:29:43.953912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:29:43.953922 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:29:43.953932 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:29:43.953942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:29:43.953952 kernel: loop: module loaded Jan 24 00:29:43.953962 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:29:43.953973 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:29:43.953986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:29:43.953997 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:29:43.954007 kernel: ACPI: bus type drm_connector registered Jan 24 00:29:43.954016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:29:43.954026 kernel: fuse: init (API version 7.39) Jan 24 00:29:43.954036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:29:43.954046 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:29:43.954056 systemd[1]: Stopped verity-setup.service. Jan 24 00:29:43.954067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:43.954097 systemd-journald[1130]: Collecting audit messages is disabled. Jan 24 00:29:43.954116 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:29:43.954127 systemd-journald[1130]: Journal started Jan 24 00:29:43.954148 systemd-journald[1130]: Runtime Journal (/run/log/journal/39a0fd87a7154e80a307df852a775842) is 8.0M, max 78.3M, 70.3M free. Jan 24 00:29:43.584667 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:29:43.600995 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:29:43.601480 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:29:43.957462 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:29:43.958276 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:29:43.959264 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:29:43.960205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:29:43.961120 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:29:43.962046 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:29:43.963101 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:29:43.964197 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:29:43.965360 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:29:43.965719 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:29:43.966923 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:29:43.967145 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:29:43.968321 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 24 00:29:43.968696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:29:43.969791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:29:43.970007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:29:43.971217 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:29:43.971445 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:29:43.972623 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:29:43.972836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:29:43.973994 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:29:43.975114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:29:43.976220 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:29:43.992086 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:29:43.999319 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:29:44.004926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:29:44.028473 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:29:44.028570 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:29:44.030239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:29:44.038005 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:29:44.044595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:29:44.046323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:29:44.051754 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:29:44.056224 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:29:44.057078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:29:44.059106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:29:44.060285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:29:44.061848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:29:44.065534 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:29:44.068900 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:29:44.073663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:29:44.075652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:29:44.077632 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:29:44.079551 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:29:44.105202 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
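With systemd-journal-flush starting, the runtime journal in /run is about to be copied into persistent storage under /var/log/journal. Useful follow-ups once the system is up:

    journalctl --disk-usage                          # space used by all journals
    journalctl -b -u systemd-modules-load.service    # this boot, one unit's messages
    journalctl --flush                               # force the runtime->persistent copy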
Jan 24 00:29:44.110232 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:29:44.112831 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:29:44.118908 systemd-journald[1130]: Time spent on flushing to /var/log/journal/39a0fd87a7154e80a307df852a775842 is 25.224ms for 979 entries. Jan 24 00:29:44.118908 systemd-journald[1130]: System Journal (/var/log/journal/39a0fd87a7154e80a307df852a775842) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:29:44.156130 systemd-journald[1130]: Received client request to flush runtime journal. Jan 24 00:29:44.156163 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 00:29:44.125984 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:29:44.162951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:29:44.170692 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:29:44.174362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:29:44.174993 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:29:44.186528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:29:44.185955 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:29:44.204289 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:29:44.213510 kernel: loop1: detected capacity change from 0 to 219144 Jan 24 00:29:44.213535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:29:44.260420 kernel: loop2: detected capacity change from 0 to 8 Jan 24 00:29:44.260791 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 24 00:29:44.260804 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jan 24 00:29:44.271915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:29:44.301456 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:29:44.352464 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:29:44.381654 kernel: loop5: detected capacity change from 0 to 219144 Jan 24 00:29:44.403499 kernel: loop6: detected capacity change from 0 to 8 Jan 24 00:29:44.407679 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 00:29:44.427828 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Jan 24 00:29:44.428516 (sd-merge)[1188]: Merged extensions into '/usr'. Jan 24 00:29:44.437573 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:29:44.437602 systemd[1]: Reloading... Jan 24 00:29:44.500459 zram_generator::config[1210]: No configuration found. Jan 24 00:29:44.594373 ldconfig[1158]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:29:44.655980 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:29:44.703358 systemd[1]: Reloading finished in 265 ms. Jan 24 00:29:44.732342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
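The (sd-merge) lines show systemd-sysext overlaying the containerd, docker, kubernetes and OEM extension images onto /usr, which is why the reload that follows picks up new units. To inspect or redo the merge:

    systemd-sysext list      # extension images found under /var/lib/extensions etc.
    systemd-sysext status    # which hierarchies currently have extensions merged
    systemd-sysext refresh   # unmerge and re-merge after adding/removing images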
Jan 24 00:29:44.733824 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:29:44.735165 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:29:44.746745 systemd[1]: Starting ensure-sysext.service... Jan 24 00:29:44.750077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:29:44.760608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:29:44.768909 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:29:44.768924 systemd[1]: Reloading... Jan 24 00:29:44.798333 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:29:44.800021 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:29:44.803025 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:29:44.803266 systemd-udevd[1260]: Using default interface naming scheme 'v255'. Jan 24 00:29:44.803395 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 24 00:29:44.804405 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 24 00:29:44.812838 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:29:44.812852 systemd-tmpfiles[1259]: Skipping /boot Jan 24 00:29:44.835333 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:29:44.835345 systemd-tmpfiles[1259]: Skipping /boot Jan 24 00:29:44.883467 zram_generator::config[1293]: No configuration found. Jan 24 00:29:45.020469 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 24 00:29:45.025448 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:29:45.056412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:29:45.066528 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1317) Jan 24 00:29:45.075455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 24 00:29:45.092455 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 24 00:29:45.098960 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 24 00:29:45.099188 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 24 00:29:45.144399 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:29:45.147266 systemd[1]: Reloading finished in 377 ms. Jan 24 00:29:45.163975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:29:45.166253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:29:45.168944 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:29:45.174446 kernel: EDAC MC: Ver: 3.0.0 Jan 24 00:29:45.206627 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:29:45.214164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 24 00:29:45.220235 systemd[1]: Finished ensure-sysext.service. 
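The "Duplicate line for path" warnings above come from two tmpfiles.d fragments claiming the same path; systemd-tmpfiles keeps the first definition and ignores the rest. For reference, the line format being de-duplicated, per tmpfiles.d(5) (the entry below is an illustrative example, not the exact conflicting line):

    # Type Path              Mode User Group           Age Argument
    d      /var/log/journal  2755 root systemd-journal -   -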
Jan 24 00:29:45.224753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:45.231915 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:45.235822 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:29:45.238296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:29:45.239615 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:29:45.244569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:29:45.250572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:29:45.253574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:29:45.257580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:29:45.259611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:29:45.266506 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:29:45.260511 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:29:45.268575 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:29:45.272583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:29:45.281583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:29:45.291589 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 24 00:29:45.298599 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:29:45.309565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:29:45.311492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:29:45.325498 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:29:45.332586 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:29:45.335930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:29:45.336321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:29:45.343715 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:29:45.344892 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:29:45.345500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:29:45.347953 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:29:45.348351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:29:45.361068 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:29:45.361257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:29:45.365528 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:29:45.374806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 24 00:29:45.375768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:29:45.375841 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:29:45.384462 augenrules[1406]: No rules Jan 24 00:29:45.383764 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:29:45.388498 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:29:45.390252 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:45.391452 lvm[1403]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:29:45.392876 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:29:45.394975 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:29:45.403357 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:29:45.411653 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:29:45.425817 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:29:45.452496 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:29:45.557613 systemd-networkd[1381]: lo: Link UP Jan 24 00:29:45.557623 systemd-networkd[1381]: lo: Gained carrier Jan 24 00:29:45.559879 systemd-networkd[1381]: Enumeration completed Jan 24 00:29:45.561896 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:45.561904 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:29:45.563704 systemd-networkd[1381]: eth0: Link UP Jan 24 00:29:45.563756 systemd-networkd[1381]: eth0: Gained carrier Jan 24 00:29:45.563801 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:29:45.567632 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:29:45.577578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:29:45.580871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:29:45.585756 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 24 00:29:45.586721 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:29:45.593224 systemd-resolved[1382]: Positive Trust Anchors: Jan 24 00:29:45.593581 systemd-resolved[1382]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:29:45.593653 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:29:45.598258 systemd-resolved[1382]: Defaulting to hostname 'linux'. Jan 24 00:29:45.600102 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:29:45.600951 systemd[1]: Reached target network.target - Network. Jan 24 00:29:45.601671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:29:45.602612 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:29:45.603454 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:29:45.604268 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:29:45.605468 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:29:45.606312 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:29:45.607106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:29:45.607920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:29:45.607949 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:29:45.608667 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:29:45.610181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:29:45.612495 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:29:45.618785 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:29:45.620305 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:29:45.621165 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:29:45.621894 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:29:45.622654 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:29:45.622692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:29:45.623761 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:29:45.626607 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:29:45.631624 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:29:45.636342 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:29:45.639274 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:29:45.640957 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
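At this point eth0 has been matched by the catch-all /usr/lib/systemd/network/zz-default.network, and systemd-resolved has fallen back to the transient hostname 'linux' with the standard DNSSEC root trust anchor loaded. A small sketch for checking both daemons interactively (commands from networkctl(1) and resolvectl(1), not taken from this log):

  # Per-link state, the matching .network file, and assigned addresses
  networkctl status eth0
  # Global and per-link DNS configuration, including trust anchors
  resolvectl status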
Jan 24 00:29:45.646611 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:29:45.649802 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:29:45.653412 jq[1434]: false Jan 24 00:29:45.653589 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:29:45.661574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:29:45.680062 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:29:45.682960 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:29:45.683806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:29:45.685240 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:29:45.689538 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:29:45.705188 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:29:45.705725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:29:45.706349 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:29:45.707644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:29:45.710771 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:29:45.712248 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:29:45.714231 jq[1450]: true Jan 24 00:29:45.734758 update_engine[1449]: I20260124 00:29:45.734462 1449 main.cc:92] Flatcar Update Engine starting Jan 24 00:29:45.747492 jq[1456]: true Jan 24 00:29:45.750115 dbus-daemon[1433]: [system] SELinux support is enabled Jan 24 00:29:45.753669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:29:45.760132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:29:45.760198 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:29:45.762533 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:29:45.762559 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:29:45.768030 systemd[1]: Started update-engine.service - Update Engine. 
Jan 24 00:29:45.769547 coreos-metadata[1432]: Jan 24 00:29:45.769 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:29:45.772389 tar[1454]: linux-amd64/LICENSE Jan 24 00:29:45.772389 tar[1454]: linux-amd64/helm Jan 24 00:29:45.771898 (ntainerd)[1458]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:29:45.772851 update_engine[1449]: I20260124 00:29:45.771397 1449 update_check_scheduler.cc:74] Next update check in 3m55s Jan 24 00:29:45.779467 extend-filesystems[1435]: Found loop4 Jan 24 00:29:45.779467 extend-filesystems[1435]: Found loop5 Jan 24 00:29:45.779467 extend-filesystems[1435]: Found loop6 Jan 24 00:29:45.779467 extend-filesystems[1435]: Found loop7 Jan 24 00:29:45.779467 extend-filesystems[1435]: Found sda Jan 24 00:29:45.779846 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda1 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda2 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda3 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found usr Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda4 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda6 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda7 Jan 24 00:29:45.796855 extend-filesystems[1435]: Found sda9 Jan 24 00:29:45.796855 extend-filesystems[1435]: Checking size of /dev/sda9 Jan 24 00:29:45.815287 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:29:45.816972 extend-filesystems[1435]: Resized partition /dev/sda9 Jan 24 00:29:45.819573 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:29:45.829383 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:29:45.831581 systemd[1]: Starting sshkeys.service... Jan 24 00:29:45.856779 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 24 00:29:45.890447 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1308) Jan 24 00:29:45.894285 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 24 00:29:45.901813 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:29:45.952226 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:29:45.958410 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:29:45.958724 systemd-logind[1448]: New seat seat0. Jan 24 00:29:45.976145 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:29:46.013049 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:29:46.040142 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:29:46.054384 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:29:46.055418 coreos-metadata[1494]: Jan 24 00:29:46.053 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 24 00:29:46.102536 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:29:46.102777 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:29:46.113713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
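extend-filesystems.service enumerates the block devices above, grows the root partition, and then lets resize2fs enlarge the mounted ext4 filesystem on-line, which is why the kernel reports resizing from 553472 to 20360187 blocks. The manual equivalent would be roughly the following (device names taken from the log):

  # Confirm the partition layout after the partition table change
  lsblk /dev/sda
  # Grow the mounted ext4 filesystem to fill /dev/sda9; ext4 supports
  # on-line growth, so the root filesystem stays mounted throughout
  resize2fs /dev/sda9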
Jan 24 00:29:46.152101 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:29:46.161152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:29:46.174795 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:29:46.180066 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:29:46.181835 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:29:46.198974 containerd[1458]: time="2026-01-24T00:29:46.198653470Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:29:46.221566 containerd[1458]: time="2026-01-24T00:29:46.221527570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.224280 containerd[1458]: time="2026-01-24T00:29:46.224096010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:46.224280 containerd[1458]: time="2026-01-24T00:29:46.224156930Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:29:46.224280 containerd[1458]: time="2026-01-24T00:29:46.224172520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:29:46.224449 containerd[1458]: time="2026-01-24T00:29:46.224409130Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:29:46.224475 containerd[1458]: time="2026-01-24T00:29:46.224449130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.224576 containerd[1458]: time="2026-01-24T00:29:46.224555420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:46.224597 containerd[1458]: time="2026-01-24T00:29:46.224575270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225312 containerd[1458]: time="2026-01-24T00:29:46.224829350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225312 containerd[1458]: time="2026-01-24T00:29:46.224847510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225312 containerd[1458]: time="2026-01-24T00:29:46.224859580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225312 containerd[1458]: time="2026-01-24T00:29:46.224868760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225312 containerd[1458]: time="2026-01-24T00:29:46.225000370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225400 containerd[1458]: time="2026-01-24T00:29:46.225352640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225560 containerd[1458]: time="2026-01-24T00:29:46.225535340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:29:46.225560 containerd[1458]: time="2026-01-24T00:29:46.225556880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:29:46.225849 containerd[1458]: time="2026-01-24T00:29:46.225708670Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:29:46.225913 containerd[1458]: time="2026-01-24T00:29:46.225894540Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:29:46.242020 containerd[1458]: time="2026-01-24T00:29:46.241991650Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:29:46.242139 containerd[1458]: time="2026-01-24T00:29:46.242080800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:29:46.242169 containerd[1458]: time="2026-01-24T00:29:46.242139790Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:29:46.242169 containerd[1458]: time="2026-01-24T00:29:46.242155950Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:29:46.242210 containerd[1458]: time="2026-01-24T00:29:46.242175500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:29:46.242392 containerd[1458]: time="2026-01-24T00:29:46.242317670Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:29:46.242614 containerd[1458]: time="2026-01-24T00:29:46.242594850Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242712150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242745410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242776110Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242794150Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242807000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242828330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242846400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242858780Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242870420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242887910Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242903180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242928730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242950370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243251 containerd[1458]: time="2026-01-24T00:29:46.242968890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.242985100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.242999540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243012680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243023880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243034740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243070270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243091400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243110380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243129890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243155380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243186610Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243223230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243241070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243492 containerd[1458]: time="2026-01-24T00:29:46.243253930Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243299330Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243318250Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243328610Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243339400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243350040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243360590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243370230Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:29:46.243699 containerd[1458]: time="2026-01-24T00:29:46.243379370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:29:46.244222 containerd[1458]: time="2026-01-24T00:29:46.243859270Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:29:46.244222 containerd[1458]: time="2026-01-24T00:29:46.243955710Z" level=info msg="Connect containerd service" Jan 24 00:29:46.244222 containerd[1458]: time="2026-01-24T00:29:46.243993760Z" level=info msg="using legacy CRI server" Jan 24 00:29:46.244222 containerd[1458]: time="2026-01-24T00:29:46.244001260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:29:46.248516 containerd[1458]: time="2026-01-24T00:29:46.248251970Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:29:46.250134 containerd[1458]: time="2026-01-24T00:29:46.250100090Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:29:46.250498 
containerd[1458]: time="2026-01-24T00:29:46.250420720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250542010Z" level=info msg="Start subscribing containerd event" Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250592910Z" level=info msg="Start recovering state" Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250546660Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250656500Z" level=info msg="Start event monitor" Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250672560Z" level=info msg="Start snapshots syncer" Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250680970Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:29:46.251781 containerd[1458]: time="2026-01-24T00:29:46.250688120Z" level=info msg="Start streaming server" Jan 24 00:29:46.250851 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:29:46.253549 containerd[1458]: time="2026-01-24T00:29:46.250750680Z" level=info msg="containerd successfully booted in 0.053253s" Jan 24 00:29:46.276461 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 24 00:29:46.285492 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 24 00:29:46.285492 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 24 00:29:46.285492 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 24 00:29:46.292046 extend-filesystems[1435]: Resized filesystem in /dev/sda9 Jan 24 00:29:46.287163 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:29:46.287426 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:29:46.321511 systemd-networkd[1381]: eth0: DHCPv4 address 172.234.200.204/24, gateway 172.234.200.1 acquired from 23.205.167.181 Jan 24 00:29:46.321578 dbus-daemon[1433]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1381 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:29:46.323377 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Jan 24 00:29:46.334970 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:29:46.391781 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:29:46.391892 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:29:46.392999 dbus-daemon[1433]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1532 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:29:46.403720 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 24 00:29:46.413658 polkitd[1533]: Started polkitd version 121 Jan 24 00:29:46.417945 polkitd[1533]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:29:46.418118 polkitd[1533]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:29:46.419587 polkitd[1533]: Finished loading, compiling and executing 2 rules Jan 24 00:29:46.419956 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:29:46.420163 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:29:46.422375 polkitd[1533]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:29:47.830964 systemd-resolved[1382]: Clock change detected. Flushing caches. Jan 24 00:29:47.831358 systemd-timesyncd[1383]: Contacted time server 198.137.202.32:123 (0.flatcar.pool.ntp.org). Jan 24 00:29:47.831407 systemd-timesyncd[1383]: Initial clock synchronization to Sat 2026-01-24 00:29:47.830924 UTC. Jan 24 00:29:47.842990 systemd-hostnamed[1532]: Hostname set to <172-234-200-204> (transient) Jan 24 00:29:47.843020 systemd-resolved[1382]: System hostname changed to '172-234-200-204'. Jan 24 00:29:47.903342 tar[1454]: linux-amd64/README.md Jan 24 00:29:47.918081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:29:48.052401 systemd-networkd[1381]: eth0: Gained IPv6LL Jan 24 00:29:48.055193 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:29:48.057088 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:29:48.065411 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:48.068385 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:29:48.088327 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:29:48.168288 coreos-metadata[1432]: Jan 24 00:29:48.168 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 24 00:29:48.264089 coreos-metadata[1432]: Jan 24 00:29:48.264 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 24 00:29:48.447135 coreos-metadata[1432]: Jan 24 00:29:48.447 INFO Fetch successful Jan 24 00:29:48.447135 coreos-metadata[1432]: Jan 24 00:29:48.447 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 24 00:29:48.455875 coreos-metadata[1494]: Jan 24 00:29:48.455 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 24 00:29:48.564490 coreos-metadata[1494]: Jan 24 00:29:48.564 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 24 00:29:48.704771 coreos-metadata[1494]: Jan 24 00:29:48.704 INFO Fetch successful Jan 24 00:29:48.707407 coreos-metadata[1432]: Jan 24 00:29:48.707 INFO Fetch successful Jan 24 00:29:48.724019 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:29:48.726283 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:29:48.730497 systemd[1]: Finished sshkeys.service. Jan 24 00:29:48.804416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:29:48.806236 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:29:48.977683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:48.979933 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:29:49.017088 systemd[1]: Startup finished in 1.011s (kernel) + 8.267s (initrd) + 4.689s (userspace) = 13.969s. 
Jan 24 00:29:49.023103 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:49.478752 kubelet[1588]: E0124 00:29:49.478684 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:49.482091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:49.482297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:29:50.849117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:29:50.855228 systemd[1]: Started sshd@0-172.234.200.204:22-68.220.241.50:56268.service - OpenSSH per-connection server daemon (68.220.241.50:56268). Jan 24 00:29:51.003493 sshd[1600]: Accepted publickey for core from 68.220.241.50 port 56268 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:51.004590 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:51.017234 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:29:51.023620 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:29:51.026923 systemd-logind[1448]: New session 1 of user core. Jan 24 00:29:51.038696 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:29:51.045405 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:29:51.064721 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:29:51.169627 systemd[1604]: Queued start job for default target default.target. Jan 24 00:29:51.184353 systemd[1604]: Created slice app.slice - User Application Slice. Jan 24 00:29:51.184382 systemd[1604]: Reached target paths.target - Paths. Jan 24 00:29:51.184397 systemd[1604]: Reached target timers.target - Timers. Jan 24 00:29:51.185983 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:29:51.199227 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:29:51.199365 systemd[1604]: Reached target sockets.target - Sockets. Jan 24 00:29:51.199382 systemd[1604]: Reached target basic.target - Basic System. Jan 24 00:29:51.199427 systemd[1604]: Reached target default.target - Main User Target. Jan 24 00:29:51.199465 systemd[1604]: Startup finished in 127ms. Jan 24 00:29:51.199785 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:29:51.211130 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:29:51.347426 systemd[1]: Started sshd@1-172.234.200.204:22-68.220.241.50:56274.service - OpenSSH per-connection server daemon (68.220.241.50:56274). Jan 24 00:29:51.498951 sshd[1615]: Accepted publickey for core from 68.220.241.50 port 56274 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:51.500858 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:51.507143 systemd-logind[1448]: New session 2 of user core. Jan 24 00:29:51.513146 systemd[1]: Started session-2.scope - Session 2 of User core. 
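The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during 'kubeadm init' or 'kubeadm join', and this node has not joined a cluster yet, so the unit fails and systemd schedules a restart. A sketch of how one might confirm and resolve this; the control-plane endpoint, token, and hash below are placeholders, not values from this log:

  # Inspect the failure reason
  journalctl -u kubelet --no-pager | tail
  # Joining a cluster writes /var/lib/kubelet/config.yaml and restarts kubelet
  # (hypothetical control-plane address and credentials)
  kubeadm join 10.0.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>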
Jan 24 00:29:51.632108 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:51.637323 systemd[1]: sshd@1-172.234.200.204:22-68.220.241.50:56274.service: Deactivated successfully. Jan 24 00:29:51.640120 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:29:51.642204 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:29:51.643922 systemd-logind[1448]: Removed session 2. Jan 24 00:29:51.662893 systemd[1]: Started sshd@2-172.234.200.204:22-68.220.241.50:56288.service - OpenSSH per-connection server daemon (68.220.241.50:56288). Jan 24 00:29:51.818392 sshd[1622]: Accepted publickey for core from 68.220.241.50 port 56288 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:51.821019 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:51.827433 systemd-logind[1448]: New session 3 of user core. Jan 24 00:29:51.839219 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:29:51.949633 sshd[1622]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:51.955834 systemd[1]: sshd@2-172.234.200.204:22-68.220.241.50:56288.service: Deactivated successfully. Jan 24 00:29:51.959428 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:29:51.960466 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:29:51.961632 systemd-logind[1448]: Removed session 3. Jan 24 00:29:51.978206 systemd[1]: Started sshd@3-172.234.200.204:22-68.220.241.50:56294.service - OpenSSH per-connection server daemon (68.220.241.50:56294). Jan 24 00:29:52.124630 sshd[1629]: Accepted publickey for core from 68.220.241.50 port 56294 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:52.126569 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:52.132639 systemd-logind[1448]: New session 4 of user core. Jan 24 00:29:52.142161 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:29:52.259772 sshd[1629]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:52.263295 systemd[1]: sshd@3-172.234.200.204:22-68.220.241.50:56294.service: Deactivated successfully. Jan 24 00:29:52.265581 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:29:52.266844 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:29:52.268415 systemd-logind[1448]: Removed session 4. Jan 24 00:29:52.293073 systemd[1]: Started sshd@4-172.234.200.204:22-68.220.241.50:56302.service - OpenSSH per-connection server daemon (68.220.241.50:56302). Jan 24 00:29:52.448346 sshd[1636]: Accepted publickey for core from 68.220.241.50 port 56302 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:52.449893 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:52.454787 systemd-logind[1448]: New session 5 of user core. Jan 24 00:29:52.462117 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 24 00:29:52.564834 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:29:52.565236 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:52.581271 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:52.602670 sshd[1636]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:52.605597 systemd[1]: sshd@4-172.234.200.204:22-68.220.241.50:56302.service: Deactivated successfully. Jan 24 00:29:52.607702 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:29:52.608884 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:29:52.610238 systemd-logind[1448]: Removed session 5. Jan 24 00:29:52.641229 systemd[1]: Started sshd@5-172.234.200.204:22-68.220.241.50:49934.service - OpenSSH per-connection server daemon (68.220.241.50:49934). Jan 24 00:29:52.805691 sshd[1644]: Accepted publickey for core from 68.220.241.50 port 49934 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:52.807363 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:52.811613 systemd-logind[1448]: New session 6 of user core. Jan 24 00:29:52.817121 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:29:52.919277 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:29:52.919621 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:52.923039 sudo[1648]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:52.928644 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:29:52.928976 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:52.941182 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:52.944596 auditctl[1651]: No rules Jan 24 00:29:52.945026 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:29:52.945230 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:52.947602 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:29:52.974915 augenrules[1669]: No rules Jan 24 00:29:52.976335 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:29:52.977679 sudo[1647]: pam_unix(sudo:session): session closed for user root Jan 24 00:29:53.001437 sshd[1644]: pam_unix(sshd:session): session closed for user core Jan 24 00:29:53.005427 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:29:53.006108 systemd[1]: sshd@5-172.234.200.204:22-68.220.241.50:49934.service: Deactivated successfully. Jan 24 00:29:53.007645 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:29:53.008416 systemd-logind[1448]: Removed session 6. Jan 24 00:29:53.029149 systemd[1]: Started sshd@6-172.234.200.204:22-68.220.241.50:49948.service - OpenSSH per-connection server daemon (68.220.241.50:49948). Jan 24 00:29:53.181305 sshd[1677]: Accepted publickey for core from 68.220.241.50 port 49948 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4 Jan 24 00:29:53.182960 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:29:53.187532 systemd-logind[1448]: New session 7 of user core. 
Jan 24 00:29:53.198154 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:29:53.294936 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:29:53.295304 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:29:53.549369 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:29:53.558344 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:29:53.812451 dockerd[1695]: time="2026-01-24T00:29:53.812308298Z" level=info msg="Starting up" Jan 24 00:29:53.880755 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2309308375-merged.mount: Deactivated successfully. Jan 24 00:29:53.906725 dockerd[1695]: time="2026-01-24T00:29:53.906697088Z" level=info msg="Loading containers: start." Jan 24 00:29:54.000179 kernel: Initializing XFRM netlink socket Jan 24 00:29:54.079410 systemd-networkd[1381]: docker0: Link UP Jan 24 00:29:54.091394 dockerd[1695]: time="2026-01-24T00:29:54.091366168Z" level=info msg="Loading containers: done." Jan 24 00:29:54.106182 dockerd[1695]: time="2026-01-24T00:29:54.106141268Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:29:54.106311 dockerd[1695]: time="2026-01-24T00:29:54.106210228Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:29:54.106311 dockerd[1695]: time="2026-01-24T00:29:54.106307048Z" level=info msg="Daemon has completed initialization" Jan 24 00:29:54.133849 dockerd[1695]: time="2026-01-24T00:29:54.133456268Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:29:54.133605 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:29:54.947882 containerd[1458]: time="2026-01-24T00:29:54.947833498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:29:55.531023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266913484.mount: Deactivated successfully. 
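dockerd reports "API listen on /run/docker.sock", the socket path that the earlier docker.socket rewrite (from the legacy /var/run/docker.sock) pointed at. A quick sketch for talking to that API directly, assuming curl is available on the host:

  # Query the Docker Engine API over the unix socket; the unversioned
  # /version endpoint returns daemon and API versions as JSON
  curl --unix-socket /run/docker.sock http://localhost/version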
Jan 24 00:29:56.613369 containerd[1458]: time="2026-01-24T00:29:56.613313998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:56.614879 containerd[1458]: time="2026-01-24T00:29:56.614467988Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068079" Jan 24 00:29:56.614879 containerd[1458]: time="2026-01-24T00:29:56.614829888Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:56.618158 containerd[1458]: time="2026-01-24T00:29:56.618110468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:56.619321 containerd[1458]: time="2026-01-24T00:29:56.619292868Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.67141158s" Jan 24 00:29:56.619392 containerd[1458]: time="2026-01-24T00:29:56.619378038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:29:56.620196 containerd[1458]: time="2026-01-24T00:29:56.620150718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:29:57.696916 containerd[1458]: time="2026-01-24T00:29:57.695907548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:57.696916 containerd[1458]: time="2026-01-24T00:29:57.696802128Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162446" Jan 24 00:29:57.696916 containerd[1458]: time="2026-01-24T00:29:57.696876878Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:57.699401 containerd[1458]: time="2026-01-24T00:29:57.699371298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:57.701938 containerd[1458]: time="2026-01-24T00:29:57.701912278Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.08172682s" Jan 24 00:29:57.702060 containerd[1458]: time="2026-01-24T00:29:57.702039718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:29:57.703286 
containerd[1458]: time="2026-01-24T00:29:57.703259028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:29:58.680535 containerd[1458]: time="2026-01-24T00:29:58.680474028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:58.682105 containerd[1458]: time="2026-01-24T00:29:58.681446278Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725933" Jan 24 00:29:58.682105 containerd[1458]: time="2026-01-24T00:29:58.682073668Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:58.684791 containerd[1458]: time="2026-01-24T00:29:58.684765488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:29:58.686171 containerd[1458]: time="2026-01-24T00:29:58.685859268Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 982.50766ms" Jan 24 00:29:58.686171 containerd[1458]: time="2026-01-24T00:29:58.685893658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:29:58.687375 containerd[1458]: time="2026-01-24T00:29:58.687355178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:29:59.548886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:29:59.556966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:29:59.666232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866624170.mount: Deactivated successfully. Jan 24 00:29:59.788279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:29:59.793981 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:29:59.838489 kubelet[1913]: E0124 00:29:59.838085 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:29:59.843301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:29:59.843507 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:30:00.023133 containerd[1458]: time="2026-01-24T00:30:00.023048558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:00.024116 containerd[1458]: time="2026-01-24T00:30:00.023957798Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965299" Jan 24 00:30:00.025034 containerd[1458]: time="2026-01-24T00:30:00.024631498Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:00.027605 containerd[1458]: time="2026-01-24T00:30:00.027575638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:00.028503 containerd[1458]: time="2026-01-24T00:30:00.028475368Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.34109278s" Jan 24 00:30:00.028656 containerd[1458]: time="2026-01-24T00:30:00.028636728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:30:00.029332 containerd[1458]: time="2026-01-24T00:30:00.029294128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:30:00.541603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058378328.mount: Deactivated successfully. 
Jan 24 00:30:01.350628 containerd[1458]: time="2026-01-24T00:30:01.350529858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.351968 containerd[1458]: time="2026-01-24T00:30:01.351897118Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Jan 24 00:30:01.352778 containerd[1458]: time="2026-01-24T00:30:01.352694578Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.357140 containerd[1458]: time="2026-01-24T00:30:01.357079028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.359450 containerd[1458]: time="2026-01-24T00:30:01.358161068Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.32883275s" Jan 24 00:30:01.359450 containerd[1458]: time="2026-01-24T00:30:01.358206478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:30:01.360034 containerd[1458]: time="2026-01-24T00:30:01.359973768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:30:01.845497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554867580.mount: Deactivated successfully. 
Jan 24 00:30:01.850803 containerd[1458]: time="2026-01-24T00:30:01.850695718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.851539 containerd[1458]: time="2026-01-24T00:30:01.851475668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Jan 24 00:30:01.853030 containerd[1458]: time="2026-01-24T00:30:01.852057538Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.854105 containerd[1458]: time="2026-01-24T00:30:01.854062028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:01.854972 containerd[1458]: time="2026-01-24T00:30:01.854944768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 494.78329ms" Jan 24 00:30:01.855107 containerd[1458]: time="2026-01-24T00:30:01.855088818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:30:01.855748 containerd[1458]: time="2026-01-24T00:30:01.855718618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:30:02.510019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589350531.mount: Deactivated successfully. Jan 24 00:30:04.736989 containerd[1458]: time="2026-01-24T00:30:04.736922408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:04.738039 containerd[1458]: time="2026-01-24T00:30:04.738008848Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166820" Jan 24 00:30:04.739822 containerd[1458]: time="2026-01-24T00:30:04.738461488Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:04.741287 containerd[1458]: time="2026-01-24T00:30:04.741261948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:04.742434 containerd[1458]: time="2026-01-24T00:30:04.742405498Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.88660129s" Jan 24 00:30:04.742477 containerd[1458]: time="2026-01-24T00:30:04.742436398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:30:07.819635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:30:07.826185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:07.858534 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-7.scope)... Jan 24 00:30:07.858553 systemd[1]: Reloading... Jan 24 00:30:07.985058 zram_generator::config[2101]: No configuration found. Jan 24 00:30:08.098730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:30:08.166476 systemd[1]: Reloading finished in 307 ms. Jan 24 00:30:08.218167 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:30:08.218274 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:30:08.218592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:08.222418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:08.384169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:08.384437 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:30:08.423785 kubelet[2155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:30:08.423785 kubelet[2155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:30:08.424179 kubelet[2155]: I0124 00:30:08.423831 2155 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:30:08.773569 kubelet[2155]: I0124 00:30:08.773294 2155 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:30:08.773569 kubelet[2155]: I0124 00:30:08.773326 2155 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:30:08.775710 kubelet[2155]: I0124 00:30:08.775683 2155 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:30:08.775710 kubelet[2155]: I0124 00:30:08.775702 2155 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:30:08.775963 kubelet[2155]: I0124 00:30:08.775943 2155 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:30:08.782785 kubelet[2155]: E0124 00:30:08.782523 2155 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.200.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:30:08.782861 kubelet[2155]: I0124 00:30:08.782824 2155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:30:08.787885 kubelet[2155]: E0124 00:30:08.787845 2155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:30:08.787941 kubelet[2155]: I0124 00:30:08.787898 2155 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:30:08.791654 kubelet[2155]: I0124 00:30:08.791640 2155 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 24 00:30:08.792496 kubelet[2155]: I0124 00:30:08.792460 2155 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:30:08.792639 kubelet[2155]: I0124 00:30:08.792491 2155 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:30:08.792639 kubelet[2155]: I0124 00:30:08.792637 2155 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:30:08.792745 kubelet[2155]: I0124 00:30:08.792646 2155 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:30:08.792745 kubelet[2155]: I0124 00:30:08.792739 2155 container_manager_linux.go:315] "Creating Dynamic Resource 
Allocation (DRA) manager" Jan 24 00:30:08.794283 kubelet[2155]: I0124 00:30:08.794266 2155 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:08.796020 kubelet[2155]: I0124 00:30:08.795660 2155 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:30:08.796020 kubelet[2155]: I0124 00:30:08.795678 2155 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:30:08.796020 kubelet[2155]: I0124 00:30:08.795709 2155 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:30:08.796020 kubelet[2155]: I0124 00:30:08.795728 2155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:30:08.802021 kubelet[2155]: E0124 00:30:08.799984 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.200.204:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-204&limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:30:08.802021 kubelet[2155]: I0124 00:30:08.800235 2155 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:30:08.802021 kubelet[2155]: I0124 00:30:08.800618 2155 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:30:08.802021 kubelet[2155]: I0124 00:30:08.800640 2155 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:30:08.802021 kubelet[2155]: W0124 00:30:08.800683 2155 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 24 00:30:08.802021 kubelet[2155]: E0124 00:30:08.801432 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.200.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:30:08.803275 kubelet[2155]: I0124 00:30:08.803255 2155 server.go:1262] "Started kubelet" Jan 24 00:30:08.804359 kubelet[2155]: I0124 00:30:08.803987 2155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:30:08.808541 kubelet[2155]: E0124 00:30:08.807114 2155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.200.204:6443/api/v1/namespaces/default/events\": dial tcp 172.234.200.204:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-200-204.188d83502125eb2a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-200-204,UID:172-234-200-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-200-204,},FirstTimestamp:2026-01-24 00:30:08.803228458 +0000 UTC m=+0.414683041,LastTimestamp:2026-01-24 00:30:08.803228458 +0000 UTC m=+0.414683041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-200-204,}" Jan 24 00:30:08.808541 kubelet[2155]: I0124 00:30:08.808420 2155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:30:08.810070 kubelet[2155]: I0124 00:30:08.809672 2155 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:30:08.813215 kubelet[2155]: I0124 00:30:08.813176 2155 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:30:08.813270 kubelet[2155]: I0124 00:30:08.813229 2155 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:30:08.813409 kubelet[2155]: I0124 00:30:08.813388 2155 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:30:08.813615 kubelet[2155]: I0124 00:30:08.813595 2155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:30:08.815171 kubelet[2155]: I0124 00:30:08.815153 2155 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:30:08.815429 kubelet[2155]: E0124 00:30:08.815413 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:08.815756 kubelet[2155]: I0124 00:30:08.815733 2155 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:30:08.815867 kubelet[2155]: I0124 00:30:08.815856 2155 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:30:08.816380 kubelet[2155]: E0124 00:30:08.816341 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.200.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:30:08.817112 kubelet[2155]: E0124 00:30:08.816505 2155 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://172.234.200.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-204?timeout=10s\": dial tcp 172.234.200.204:6443: connect: connection refused" interval="200ms" Jan 24 00:30:08.817112 kubelet[2155]: I0124 00:30:08.816750 2155 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:30:08.817112 kubelet[2155]: I0124 00:30:08.816816 2155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:30:08.819991 kubelet[2155]: E0124 00:30:08.819968 2155 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:30:08.826173 kubelet[2155]: I0124 00:30:08.826135 2155 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:30:08.836537 kubelet[2155]: I0124 00:30:08.836505 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:30:08.837762 kubelet[2155]: I0124 00:30:08.837735 2155 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 24 00:30:08.837762 kubelet[2155]: I0124 00:30:08.837757 2155 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:30:08.837831 kubelet[2155]: I0124 00:30:08.837778 2155 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:30:08.837831 kubelet[2155]: E0124 00:30:08.837818 2155 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:30:08.847828 kubelet[2155]: E0124 00:30:08.847807 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.200.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:30:08.857934 kubelet[2155]: I0124 00:30:08.857918 2155 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:30:08.857934 kubelet[2155]: I0124 00:30:08.857931 2155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:30:08.858070 kubelet[2155]: I0124 00:30:08.857946 2155 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:08.859372 kubelet[2155]: I0124 00:30:08.859358 2155 policy_none.go:49] "None policy: Start" Jan 24 00:30:08.859421 kubelet[2155]: I0124 00:30:08.859375 2155 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:30:08.859421 kubelet[2155]: I0124 00:30:08.859387 2155 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:30:08.860081 kubelet[2155]: I0124 00:30:08.860067 2155 policy_none.go:47] "Start" Jan 24 00:30:08.864862 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:30:08.882775 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:30:08.886034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:30:08.890406 kubelet[2155]: E0124 00:30:08.889972 2155 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:30:08.890406 kubelet[2155]: I0124 00:30:08.890158 2155 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:30:08.890406 kubelet[2155]: I0124 00:30:08.890168 2155 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:30:08.890406 kubelet[2155]: I0124 00:30:08.890347 2155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:30:08.892399 kubelet[2155]: E0124 00:30:08.892299 2155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:30:08.892399 kubelet[2155]: E0124 00:30:08.892329 2155 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-200-204\" not found" Jan 24 00:30:08.948341 systemd[1]: Created slice kubepods-burstable-pod9a86f0754cde934c7e9496786fddfaca.slice - libcontainer container kubepods-burstable-pod9a86f0754cde934c7e9496786fddfaca.slice. Jan 24 00:30:08.968722 kubelet[2155]: E0124 00:30:08.968518 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:08.970635 systemd[1]: Created slice kubepods-burstable-pod59c06506a948a1755a6144cbdb998c3e.slice - libcontainer container kubepods-burstable-pod59c06506a948a1755a6144cbdb998c3e.slice. Jan 24 00:30:08.978557 kubelet[2155]: E0124 00:30:08.978338 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:08.980849 systemd[1]: Created slice kubepods-burstable-pod14fda534f09383c177705513a50df241.slice - libcontainer container kubepods-burstable-pod14fda534f09383c177705513a50df241.slice. 
Jan 24 00:30:08.982589 kubelet[2155]: E0124 00:30:08.982572 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:08.991744 kubelet[2155]: I0124 00:30:08.991728 2155 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-204" Jan 24 00:30:08.992034 kubelet[2155]: E0124 00:30:08.991991 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.204:6443/api/v1/nodes\": dial tcp 172.234.200.204:6443: connect: connection refused" node="172-234-200-204" Jan 24 00:30:09.017454 kubelet[2155]: I0124 00:30:09.017415 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-kubeconfig\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:09.017570 kubelet[2155]: I0124 00:30:09.017457 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:09.017570 kubelet[2155]: I0124 00:30:09.017488 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:09.017570 kubelet[2155]: I0124 00:30:09.017501 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-k8s-certs\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:09.017570 kubelet[2155]: I0124 00:30:09.017520 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:09.017570 kubelet[2155]: I0124 00:30:09.017534 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14fda534f09383c177705513a50df241-kubeconfig\") pod \"kube-scheduler-172-234-200-204\" (UID: \"14fda534f09383c177705513a50df241\") " pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:09.017683 kubelet[2155]: I0124 00:30:09.017547 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-ca-certs\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 
00:30:09.017683 kubelet[2155]: I0124 00:30:09.017558 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-k8s-certs\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:09.017683 kubelet[2155]: I0124 00:30:09.017575 2155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-ca-certs\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:09.018103 kubelet[2155]: E0124 00:30:09.018076 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-204?timeout=10s\": dial tcp 172.234.200.204:6443: connect: connection refused" interval="400ms" Jan 24 00:30:09.194450 kubelet[2155]: I0124 00:30:09.194143 2155 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-204" Jan 24 00:30:09.194450 kubelet[2155]: E0124 00:30:09.194405 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.204:6443/api/v1/nodes\": dial tcp 172.234.200.204:6443: connect: connection refused" node="172-234-200-204" Jan 24 00:30:09.270189 kubelet[2155]: E0124 00:30:09.270160 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:09.270929 containerd[1458]: time="2026-01-24T00:30:09.270896588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-204,Uid:9a86f0754cde934c7e9496786fddfaca,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:09.280488 kubelet[2155]: E0124 00:30:09.280438 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:09.281024 containerd[1458]: time="2026-01-24T00:30:09.280800888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-204,Uid:59c06506a948a1755a6144cbdb998c3e,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:09.283709 kubelet[2155]: E0124 00:30:09.283692 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:09.283976 containerd[1458]: time="2026-01-24T00:30:09.283948628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-204,Uid:14fda534f09383c177705513a50df241,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:09.419170 kubelet[2155]: E0124 00:30:09.419133 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-204?timeout=10s\": dial tcp 172.234.200.204:6443: connect: connection refused" interval="800ms" Jan 24 00:30:09.596113 kubelet[2155]: I0124 00:30:09.596082 2155 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-204" Jan 24 00:30:09.596520 
kubelet[2155]: E0124 00:30:09.596338 2155 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.200.204:6443/api/v1/nodes\": dial tcp 172.234.200.204:6443: connect: connection refused" node="172-234-200-204" Jan 24 00:30:09.706462 kubelet[2155]: E0124 00:30:09.706371 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.200.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:30:09.713121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664149882.mount: Deactivated successfully. Jan 24 00:30:09.718380 containerd[1458]: time="2026-01-24T00:30:09.718345528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:30:09.719735 containerd[1458]: time="2026-01-24T00:30:09.719710548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Jan 24 00:30:09.720108 containerd[1458]: time="2026-01-24T00:30:09.720085458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:30:09.721980 containerd[1458]: time="2026-01-24T00:30:09.720761908Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:30:09.722279 containerd[1458]: time="2026-01-24T00:30:09.722240268Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:30:09.723185 containerd[1458]: time="2026-01-24T00:30:09.723158848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:30:09.723710 containerd[1458]: time="2026-01-24T00:30:09.723680138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:30:09.724428 containerd[1458]: time="2026-01-24T00:30:09.724407058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:30:09.726187 containerd[1458]: time="2026-01-24T00:30:09.726165588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.30181ms" Jan 24 00:30:09.727830 containerd[1458]: time="2026-01-24T00:30:09.727809548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 456.84176ms" Jan 24 00:30:09.729533 containerd[1458]: time="2026-01-24T00:30:09.729511308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.4811ms" Jan 24 00:30:09.835797 containerd[1458]: time="2026-01-24T00:30:09.835554348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:09.835797 containerd[1458]: time="2026-01-24T00:30:09.835620188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:09.835797 containerd[1458]: time="2026-01-24T00:30:09.835633518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.835797 containerd[1458]: time="2026-01-24T00:30:09.835709368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.837570 containerd[1458]: time="2026-01-24T00:30:09.837426608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:09.837570 containerd[1458]: time="2026-01-24T00:30:09.837484018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:09.837570 containerd[1458]: time="2026-01-24T00:30:09.837495098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.838125 containerd[1458]: time="2026-01-24T00:30:09.837917258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.842021 containerd[1458]: time="2026-01-24T00:30:09.841537098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:09.842021 containerd[1458]: time="2026-01-24T00:30:09.841571518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:09.842021 containerd[1458]: time="2026-01-24T00:30:09.841950018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.842382 containerd[1458]: time="2026-01-24T00:30:09.842132888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:09.862746 kubelet[2155]: E0124 00:30:09.862661 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.200.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:30:09.869152 systemd[1]: Started cri-containerd-eaf0d7c53d6ca90e9221d9c1c3aed260eedd020b2419a220fe34f951dd86e908.scope - libcontainer container eaf0d7c53d6ca90e9221d9c1c3aed260eedd020b2419a220fe34f951dd86e908. Jan 24 00:30:09.875375 systemd[1]: Started cri-containerd-02dfd4c6cd1c7c5dd2e8aeac509efd3b4f3d8bce1240e95f2d0ee2fc2f0eeadc.scope - libcontainer container 02dfd4c6cd1c7c5dd2e8aeac509efd3b4f3d8bce1240e95f2d0ee2fc2f0eeadc. Jan 24 00:30:09.893249 systemd[1]: Started cri-containerd-b6a3c0a4b5b98e258dd2fd9039fca426dddf6d0d0bcdeef411c795ec60a3154b.scope - libcontainer container b6a3c0a4b5b98e258dd2fd9039fca426dddf6d0d0bcdeef411c795ec60a3154b. Jan 24 00:30:09.953208 containerd[1458]: time="2026-01-24T00:30:09.953169978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-200-204,Uid:14fda534f09383c177705513a50df241,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaf0d7c53d6ca90e9221d9c1c3aed260eedd020b2419a220fe34f951dd86e908\"" Jan 24 00:30:09.954806 kubelet[2155]: E0124 00:30:09.954779 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:09.961016 containerd[1458]: time="2026-01-24T00:30:09.958776188Z" level=info msg="CreateContainer within sandbox \"eaf0d7c53d6ca90e9221d9c1c3aed260eedd020b2419a220fe34f951dd86e908\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:30:09.978343 containerd[1458]: time="2026-01-24T00:30:09.978308288Z" level=info msg="CreateContainer within sandbox \"eaf0d7c53d6ca90e9221d9c1c3aed260eedd020b2419a220fe34f951dd86e908\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a486e1f46a1d116ce5afd6d7760ff153768871029852341279f4100e7e4001c\"" Jan 24 00:30:09.979091 containerd[1458]: time="2026-01-24T00:30:09.978991528Z" level=info msg="StartContainer for \"2a486e1f46a1d116ce5afd6d7760ff153768871029852341279f4100e7e4001c\"" Jan 24 00:30:09.990991 containerd[1458]: time="2026-01-24T00:30:09.990949968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-200-204,Uid:59c06506a948a1755a6144cbdb998c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"02dfd4c6cd1c7c5dd2e8aeac509efd3b4f3d8bce1240e95f2d0ee2fc2f0eeadc\"" Jan 24 00:30:09.991748 kubelet[2155]: E0124 00:30:09.991707 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:09.995983 containerd[1458]: time="2026-01-24T00:30:09.995948518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-200-204,Uid:9a86f0754cde934c7e9496786fddfaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a3c0a4b5b98e258dd2fd9039fca426dddf6d0d0bcdeef411c795ec60a3154b\"" Jan 24 00:30:09.996854 containerd[1458]: time="2026-01-24T00:30:09.996807858Z" level=info msg="CreateContainer within sandbox 
\"02dfd4c6cd1c7c5dd2e8aeac509efd3b4f3d8bce1240e95f2d0ee2fc2f0eeadc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:30:09.997042 kubelet[2155]: E0124 00:30:09.996987 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:10.019689 containerd[1458]: time="2026-01-24T00:30:10.019653958Z" level=info msg="CreateContainer within sandbox \"b6a3c0a4b5b98e258dd2fd9039fca426dddf6d0d0bcdeef411c795ec60a3154b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:30:10.021153 systemd[1]: Started cri-containerd-2a486e1f46a1d116ce5afd6d7760ff153768871029852341279f4100e7e4001c.scope - libcontainer container 2a486e1f46a1d116ce5afd6d7760ff153768871029852341279f4100e7e4001c. Jan 24 00:30:10.024576 containerd[1458]: time="2026-01-24T00:30:10.023691778Z" level=info msg="CreateContainer within sandbox \"02dfd4c6cd1c7c5dd2e8aeac509efd3b4f3d8bce1240e95f2d0ee2fc2f0eeadc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7d6d4d1071345373f8a5df16bcc2041deb3efa18978655d883963f506585d1c\"" Jan 24 00:30:10.024576 containerd[1458]: time="2026-01-24T00:30:10.024520558Z" level=info msg="StartContainer for \"d7d6d4d1071345373f8a5df16bcc2041deb3efa18978655d883963f506585d1c\"" Jan 24 00:30:10.034672 containerd[1458]: time="2026-01-24T00:30:10.034248488Z" level=info msg="CreateContainer within sandbox \"b6a3c0a4b5b98e258dd2fd9039fca426dddf6d0d0bcdeef411c795ec60a3154b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1532b6680c042adb91150948872b9c7f5ab43c820024ffd08e6548938d000f4\"" Jan 24 00:30:10.034672 containerd[1458]: time="2026-01-24T00:30:10.034590408Z" level=info msg="StartContainer for \"e1532b6680c042adb91150948872b9c7f5ab43c820024ffd08e6548938d000f4\"" Jan 24 00:30:10.060240 systemd[1]: Started cri-containerd-d7d6d4d1071345373f8a5df16bcc2041deb3efa18978655d883963f506585d1c.scope - libcontainer container d7d6d4d1071345373f8a5df16bcc2041deb3efa18978655d883963f506585d1c. Jan 24 00:30:10.081613 containerd[1458]: time="2026-01-24T00:30:10.080982098Z" level=info msg="StartContainer for \"2a486e1f46a1d116ce5afd6d7760ff153768871029852341279f4100e7e4001c\" returns successfully" Jan 24 00:30:10.098110 systemd[1]: Started cri-containerd-e1532b6680c042adb91150948872b9c7f5ab43c820024ffd08e6548938d000f4.scope - libcontainer container e1532b6680c042adb91150948872b9c7f5ab43c820024ffd08e6548938d000f4. 
Jan 24 00:30:10.135805 containerd[1458]: time="2026-01-24T00:30:10.135712088Z" level=info msg="StartContainer for \"d7d6d4d1071345373f8a5df16bcc2041deb3efa18978655d883963f506585d1c\" returns successfully" Jan 24 00:30:10.164270 containerd[1458]: time="2026-01-24T00:30:10.164154468Z" level=info msg="StartContainer for \"e1532b6680c042adb91150948872b9c7f5ab43c820024ffd08e6548938d000f4\" returns successfully" Jan 24 00:30:10.221232 kubelet[2155]: E0124 00:30:10.221190 2155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.200.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-200-204?timeout=10s\": dial tcp 172.234.200.204:6443: connect: connection refused" interval="1.6s" Jan 24 00:30:10.225469 kubelet[2155]: E0124 00:30:10.225437 2155 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.200.204:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-200-204&limit=500&resourceVersion=0\": dial tcp 172.234.200.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:30:10.399973 kubelet[2155]: I0124 00:30:10.399181 2155 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-204" Jan 24 00:30:10.860239 kubelet[2155]: E0124 00:30:10.859939 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:10.860239 kubelet[2155]: E0124 00:30:10.860087 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:10.863451 kubelet[2155]: E0124 00:30:10.863057 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:10.863451 kubelet[2155]: E0124 00:30:10.863156 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:10.866548 kubelet[2155]: E0124 00:30:10.866349 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:10.866548 kubelet[2155]: E0124 00:30:10.866440 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:11.747575 kubelet[2155]: I0124 00:30:11.747531 2155 kubelet_node_status.go:78] "Successfully registered node" node="172-234-200-204" Jan 24 00:30:11.747575 kubelet[2155]: E0124 00:30:11.747574 2155 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-234-200-204\": node \"172-234-200-204\" not found" Jan 24 00:30:11.784133 kubelet[2155]: E0124 00:30:11.784108 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:11.870066 kubelet[2155]: E0124 00:30:11.870007 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:11.872178 kubelet[2155]: E0124 
00:30:11.872144 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:11.872269 kubelet[2155]: E0124 00:30:11.870530 2155 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-200-204\" not found" node="172-234-200-204" Jan 24 00:30:11.872316 kubelet[2155]: E0124 00:30:11.872291 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:11.884616 kubelet[2155]: E0124 00:30:11.884576 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:11.985388 kubelet[2155]: E0124 00:30:11.985338 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.086454 kubelet[2155]: E0124 00:30:12.086415 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.186952 kubelet[2155]: E0124 00:30:12.186902 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.287603 kubelet[2155]: E0124 00:30:12.287577 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.388470 kubelet[2155]: E0124 00:30:12.388310 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.489111 kubelet[2155]: E0124 00:30:12.489065 2155 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:12.516502 kubelet[2155]: I0124 00:30:12.516384 2155 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:12.520513 kubelet[2155]: E0124 00:30:12.520474 2155 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-200-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:12.520513 kubelet[2155]: I0124 00:30:12.520499 2155 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:12.521660 kubelet[2155]: E0124 00:30:12.521636 2155 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-200-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:12.521660 kubelet[2155]: I0124 00:30:12.521653 2155 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:12.522644 kubelet[2155]: E0124 00:30:12.522624 2155 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-200-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:12.797507 kubelet[2155]: I0124 00:30:12.797397 2155 apiserver.go:52] "Watching apiserver" Jan 24 00:30:12.816185 kubelet[2155]: I0124 00:30:12.816141 2155 desired_state_of_world_populator.go:154] "Finished populating initial 
desired state of world" Jan 24 00:30:12.869565 kubelet[2155]: I0124 00:30:12.869524 2155 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:12.874256 kubelet[2155]: E0124 00:30:12.874227 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:13.744168 kubelet[2155]: I0124 00:30:13.744136 2155 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:13.749291 kubelet[2155]: E0124 00:30:13.749265 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:13.851363 systemd[1]: Reloading requested from client PID 2444 ('systemctl') (unit session-7.scope)... Jan 24 00:30:13.851390 systemd[1]: Reloading... Jan 24 00:30:13.872939 kubelet[2155]: E0124 00:30:13.872391 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:13.872939 kubelet[2155]: E0124 00:30:13.872751 2155 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:13.992044 zram_generator::config[2487]: No configuration found. Jan 24 00:30:14.164939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:30:14.307860 systemd[1]: Reloading finished in 455 ms. Jan 24 00:30:14.374381 kubelet[2155]: I0124 00:30:14.373893 2155 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:30:14.373933 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:14.398828 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:30:14.399262 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:14.411269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:30:14.632179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:30:14.642945 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:30:14.718772 kubelet[2533]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:30:14.718772 kubelet[2533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:30:14.720248 kubelet[2533]: I0124 00:30:14.719133 2533 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:30:14.728799 kubelet[2533]: I0124 00:30:14.728760 2533 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:30:14.728799 kubelet[2533]: I0124 00:30:14.728785 2533 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:30:14.728921 kubelet[2533]: I0124 00:30:14.728813 2533 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:30:14.728921 kubelet[2533]: I0124 00:30:14.728820 2533 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:30:14.729058 kubelet[2533]: I0124 00:30:14.729034 2533 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:30:14.730165 kubelet[2533]: I0124 00:30:14.730140 2533 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:30:14.732296 kubelet[2533]: I0124 00:30:14.732123 2533 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:30:14.740968 kubelet[2533]: E0124 00:30:14.740927 2533 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:30:14.742044 kubelet[2533]: I0124 00:30:14.741161 2533 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:30:14.746879 kubelet[2533]: I0124 00:30:14.746856 2533 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:30:14.747348 kubelet[2533]: I0124 00:30:14.747283 2533 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:30:14.747709 kubelet[2533]: I0124 00:30:14.747480 2533 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-200-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:30:14.747984 kubelet[2533]: I0124 00:30:14.747936 2533 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:30:14.748105 kubelet[2533]: I0124 00:30:14.748089 2533 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:30:14.748205 kubelet[2533]: I0124 00:30:14.748190 2533 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:30:14.749513 kubelet[2533]: I0124 00:30:14.749494 2533 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:14.749872 kubelet[2533]: I0124 00:30:14.749855 2533 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:30:14.749954 kubelet[2533]: I0124 00:30:14.749939 2533 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:30:14.750080 kubelet[2533]: I0124 00:30:14.750065 2533 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:30:14.750181 kubelet[2533]: I0124 00:30:14.750158 2533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:30:14.755273 kubelet[2533]: I0124 00:30:14.755228 2533 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:30:14.757089 kubelet[2533]: I0124 00:30:14.757064 2533 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:30:14.758034 kubelet[2533]: I0124 00:30:14.757234 2533 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:30:14.763167 
kubelet[2533]: I0124 00:30:14.763087 2533 server.go:1262] "Started kubelet" Jan 24 00:30:14.765906 kubelet[2533]: I0124 00:30:14.765862 2533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:30:14.772107 kubelet[2533]: I0124 00:30:14.771961 2533 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:30:14.775582 kubelet[2533]: I0124 00:30:14.775529 2533 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:30:14.775786 kubelet[2533]: E0124 00:30:14.775757 2533 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-200-204\" not found" Jan 24 00:30:14.775993 kubelet[2533]: I0124 00:30:14.775969 2533 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:30:14.776156 kubelet[2533]: I0124 00:30:14.776129 2533 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:30:14.777628 kubelet[2533]: I0124 00:30:14.777606 2533 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:30:14.781084 kubelet[2533]: I0124 00:30:14.781038 2533 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:30:14.781145 kubelet[2533]: I0124 00:30:14.781089 2533 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:30:14.782214 kubelet[2533]: I0124 00:30:14.782180 2533 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:30:14.783147 kubelet[2533]: I0124 00:30:14.783105 2533 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:30:14.790927 kubelet[2533]: I0124 00:30:14.790627 2533 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:30:14.790927 kubelet[2533]: I0124 00:30:14.790766 2533 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:30:14.795781 kubelet[2533]: E0124 00:30:14.795703 2533 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:30:14.797563 kubelet[2533]: I0124 00:30:14.796313 2533 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:30:14.811476 kubelet[2533]: I0124 00:30:14.811420 2533 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:30:14.812804 kubelet[2533]: I0124 00:30:14.812773 2533 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:30:14.812804 kubelet[2533]: I0124 00:30:14.812795 2533 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:30:14.812910 kubelet[2533]: I0124 00:30:14.812816 2533 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:30:14.812910 kubelet[2533]: E0124 00:30:14.812880 2533 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:30:14.860601 kubelet[2533]: I0124 00:30:14.860414 2533 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860741 2533 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860765 2533 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860880 2533 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860890 2533 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860906 2533 policy_none.go:49] "None policy: Start" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860916 2533 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:30:14.861270 kubelet[2533]: I0124 00:30:14.860926 2533 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:30:14.861844 kubelet[2533]: I0124 00:30:14.861717 2533 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:30:14.861844 kubelet[2533]: I0124 00:30:14.861732 2533 policy_none.go:47] "Start" Jan 24 00:30:14.868199 kubelet[2533]: E0124 00:30:14.868183 2533 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:30:14.868832 kubelet[2533]: I0124 00:30:14.868821 2533 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:30:14.869836 kubelet[2533]: I0124 00:30:14.869807 2533 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:30:14.870156 kubelet[2533]: I0124 00:30:14.870142 2533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:30:14.874693 kubelet[2533]: E0124 00:30:14.874646 2533 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:30:14.917107 kubelet[2533]: I0124 00:30:14.914479 2533 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:14.917107 kubelet[2533]: I0124 00:30:14.914876 2533 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:14.917107 kubelet[2533]: I0124 00:30:14.915169 2533 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:14.920431 kubelet[2533]: E0124 00:30:14.920412 2533 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-200-204\" already exists" pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:14.921771 kubelet[2533]: E0124 00:30:14.921605 2533 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-200-204\" already exists" pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:14.975043 kubelet[2533]: I0124 00:30:14.973080 2533 kubelet_node_status.go:75] "Attempting to register node" node="172-234-200-204" Jan 24 00:30:14.979438 kubelet[2533]: I0124 00:30:14.979333 2533 kubelet_node_status.go:124] "Node was previously registered" node="172-234-200-204" Jan 24 00:30:14.980117 kubelet[2533]: I0124 00:30:14.979678 2533 kubelet_node_status.go:78] "Successfully registered node" node="172-234-200-204" Jan 24 00:30:15.077207 kubelet[2533]: I0124 00:30:15.077162 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14fda534f09383c177705513a50df241-kubeconfig\") pod \"kube-scheduler-172-234-200-204\" (UID: \"14fda534f09383c177705513a50df241\") " pod="kube-system/kube-scheduler-172-234-200-204" Jan 24 00:30:15.077724 kubelet[2533]: I0124 00:30:15.077513 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-ca-certs\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:15.077724 kubelet[2533]: I0124 00:30:15.077537 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-k8s-certs\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:15.077724 kubelet[2533]: I0124 00:30:15.077588 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a86f0754cde934c7e9496786fddfaca-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-200-204\" (UID: \"9a86f0754cde934c7e9496786fddfaca\") " pod="kube-system/kube-apiserver-172-234-200-204" Jan 24 00:30:15.077724 kubelet[2533]: I0124 00:30:15.077607 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-ca-certs\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:15.077724 kubelet[2533]: I0124 00:30:15.077622 2533 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-flexvolume-dir\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:15.077980 kubelet[2533]: I0124 00:30:15.077676 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-k8s-certs\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:15.077980 kubelet[2533]: I0124 00:30:15.077959 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-kubeconfig\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:15.078160 kubelet[2533]: I0124 00:30:15.078019 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59c06506a948a1755a6144cbdb998c3e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-200-204\" (UID: \"59c06506a948a1755a6144cbdb998c3e\") " pod="kube-system/kube-controller-manager-172-234-200-204" Jan 24 00:30:15.222090 kubelet[2533]: E0124 00:30:15.221348 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.222752 kubelet[2533]: E0124 00:30:15.222736 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.224367 kubelet[2533]: E0124 00:30:15.224186 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.751266 kubelet[2533]: I0124 00:30:15.751234 2533 apiserver.go:52] "Watching apiserver" Jan 24 00:30:15.776401 kubelet[2533]: I0124 00:30:15.776377 2533 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:30:15.838704 kubelet[2533]: E0124 00:30:15.838679 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.839300 kubelet[2533]: E0124 00:30:15.839267 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.839492 kubelet[2533]: E0124 00:30:15.839474 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:15.853947 kubelet[2533]: I0124 00:30:15.853910 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-200-204" 
podStartSLOduration=2.853899078 podStartE2EDuration="2.853899078s" podCreationTimestamp="2026-01-24 00:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:15.853745998 +0000 UTC m=+1.202496421" watchObservedRunningTime="2026-01-24 00:30:15.853899078 +0000 UTC m=+1.202649501" Jan 24 00:30:15.865264 kubelet[2533]: I0124 00:30:15.865187 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-200-204" podStartSLOduration=3.865177128 podStartE2EDuration="3.865177128s" podCreationTimestamp="2026-01-24 00:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:15.859785558 +0000 UTC m=+1.208535981" watchObservedRunningTime="2026-01-24 00:30:15.865177128 +0000 UTC m=+1.213927551" Jan 24 00:30:16.839584 kubelet[2533]: E0124 00:30:16.839544 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:16.840021 kubelet[2533]: E0124 00:30:16.839985 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:17.768623 kubelet[2533]: E0124 00:30:17.768576 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:17.877372 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:30:20.122363 kubelet[2533]: I0124 00:30:20.122310 2533 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:30:20.123432 containerd[1458]: time="2026-01-24T00:30:20.123256188Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:30:20.124416 kubelet[2533]: I0124 00:30:20.123501 2533 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:30:21.277217 kubelet[2533]: I0124 00:30:21.276097 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-200-204" podStartSLOduration=7.27605143 podStartE2EDuration="7.27605143s" podCreationTimestamp="2026-01-24 00:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:15.865388548 +0000 UTC m=+1.214138991" watchObservedRunningTime="2026-01-24 00:30:21.27605143 +0000 UTC m=+6.624801853" Jan 24 00:30:21.290724 systemd[1]: Created slice kubepods-besteffort-pod18d05c36_2547_42f5_b342_fbce0a7e6a16.slice - libcontainer container kubepods-besteffort-pod18d05c36_2547_42f5_b342_fbce0a7e6a16.slice. 
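The "Created slice" entry above shows how kubelet derives a systemd slice name from pod metadata: the BestEffort QoS class plus the pod UID (which reappears in the volume lines that follow) with its dashes rewritten to underscores, since systemd treats "-" as a hierarchy separator inside unit names. A minimal Go sketch of that mapping, inferred from the names in this log:

```go
// slicename.go — reproduce the systemd slice name from the "Created
// slice" line above. The prefix and the dash-to-underscore rewrite are
// inferred from this log, where pod UID 18d05c36-2547-42f5-b342-fbce0a7e6a16
// becomes kubepods-besteffort-pod18d05c36_2547_42f5_b342_fbce0a7e6a16.slice.
package main

import (
	"fmt"
	"strings"
)

// sliceForBestEffortPod builds the slice name for a BestEffort pod.
// systemd uses "-" as a path separator inside slice names, so the
// UID's dashes must be escaped to underscores.
func sliceForBestEffortPod(uid string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceForBestEffortPod("18d05c36-2547-42f5-b342-fbce0a7e6a16"))
	// kubepods-besteffort-pod18d05c36_2547_42f5_b342_fbce0a7e6a16.slice
}
```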
Jan 24 00:30:21.314988 kubelet[2533]: I0124 00:30:21.314902 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18d05c36-2547-42f5-b342-fbce0a7e6a16-kube-proxy\") pod \"kube-proxy-jfd9r\" (UID: \"18d05c36-2547-42f5-b342-fbce0a7e6a16\") " pod="kube-system/kube-proxy-jfd9r" Jan 24 00:30:21.314988 kubelet[2533]: I0124 00:30:21.314956 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18d05c36-2547-42f5-b342-fbce0a7e6a16-xtables-lock\") pod \"kube-proxy-jfd9r\" (UID: \"18d05c36-2547-42f5-b342-fbce0a7e6a16\") " pod="kube-system/kube-proxy-jfd9r" Jan 24 00:30:21.315243 kubelet[2533]: I0124 00:30:21.315031 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18d05c36-2547-42f5-b342-fbce0a7e6a16-lib-modules\") pod \"kube-proxy-jfd9r\" (UID: \"18d05c36-2547-42f5-b342-fbce0a7e6a16\") " pod="kube-system/kube-proxy-jfd9r" Jan 24 00:30:21.315243 kubelet[2533]: I0124 00:30:21.315049 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5kz\" (UniqueName: \"kubernetes.io/projected/18d05c36-2547-42f5-b342-fbce0a7e6a16-kube-api-access-hc5kz\") pod \"kube-proxy-jfd9r\" (UID: \"18d05c36-2547-42f5-b342-fbce0a7e6a16\") " pod="kube-system/kube-proxy-jfd9r" Jan 24 00:30:21.461973 systemd[1]: Created slice kubepods-besteffort-podcb6b1793_e9f1_4e0f_8a8c_fc1a005096f5.slice - libcontainer container kubepods-besteffort-podcb6b1793_e9f1_4e0f_8a8c_fc1a005096f5.slice. Jan 24 00:30:21.516865 kubelet[2533]: I0124 00:30:21.516789 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdh7\" (UniqueName: \"kubernetes.io/projected/cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5-kube-api-access-hbdh7\") pod \"tigera-operator-65cdcdfd6d-p56qj\" (UID: \"cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-p56qj" Jan 24 00:30:21.516865 kubelet[2533]: I0124 00:30:21.516824 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-p56qj\" (UID: \"cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-p56qj" Jan 24 00:30:21.603562 kubelet[2533]: E0124 00:30:21.601571 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:21.604469 containerd[1458]: time="2026-01-24T00:30:21.604399735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfd9r,Uid:18d05c36-2547-42f5-b342-fbce0a7e6a16,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:21.633037 containerd[1458]: time="2026-01-24T00:30:21.629808979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:21.634115 containerd[1458]: time="2026-01-24T00:30:21.632075292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:21.634567 containerd[1458]: time="2026-01-24T00:30:21.634298425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:21.634567 containerd[1458]: time="2026-01-24T00:30:21.634463627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:21.666283 systemd[1]: Started cri-containerd-612f9c5ba0aba3b54f44772f10f7fe1e43ed7af5ed3de2212fd9ca8aefc499ca.scope - libcontainer container 612f9c5ba0aba3b54f44772f10f7fe1e43ed7af5ed3de2212fd9ca8aefc499ca. Jan 24 00:30:21.701642 containerd[1458]: time="2026-01-24T00:30:21.701555383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfd9r,Uid:18d05c36-2547-42f5-b342-fbce0a7e6a16,Namespace:kube-system,Attempt:0,} returns sandbox id \"612f9c5ba0aba3b54f44772f10f7fe1e43ed7af5ed3de2212fd9ca8aefc499ca\"" Jan 24 00:30:21.703073 kubelet[2533]: E0124 00:30:21.702851 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:21.708295 containerd[1458]: time="2026-01-24T00:30:21.708084471Z" level=info msg="CreateContainer within sandbox \"612f9c5ba0aba3b54f44772f10f7fe1e43ed7af5ed3de2212fd9ca8aefc499ca\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:30:21.719066 containerd[1458]: time="2026-01-24T00:30:21.719036774Z" level=info msg="CreateContainer within sandbox \"612f9c5ba0aba3b54f44772f10f7fe1e43ed7af5ed3de2212fd9ca8aefc499ca\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"00e55c5da770b7880ad674934741dffb52e4811c7e01fcf714e1de4f50282446\"" Jan 24 00:30:21.721168 containerd[1458]: time="2026-01-24T00:30:21.721143216Z" level=info msg="StartContainer for \"00e55c5da770b7880ad674934741dffb52e4811c7e01fcf714e1de4f50282446\"" Jan 24 00:30:21.754191 systemd[1]: Started cri-containerd-00e55c5da770b7880ad674934741dffb52e4811c7e01fcf714e1de4f50282446.scope - libcontainer container 00e55c5da770b7880ad674934741dffb52e4811c7e01fcf714e1de4f50282446. Jan 24 00:30:21.768491 containerd[1458]: time="2026-01-24T00:30:21.768434937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-p56qj,Uid:cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:30:21.794103 containerd[1458]: time="2026-01-24T00:30:21.793696879Z" level=info msg="StartContainer for \"00e55c5da770b7880ad674934741dffb52e4811c7e01fcf714e1de4f50282446\" returns successfully" Jan 24 00:30:21.811137 containerd[1458]: time="2026-01-24T00:30:21.810824376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:21.811137 containerd[1458]: time="2026-01-24T00:30:21.810870577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:21.811137 containerd[1458]: time="2026-01-24T00:30:21.810896077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:21.811137 containerd[1458]: time="2026-01-24T00:30:21.810979078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:21.831346 systemd[1]: Started cri-containerd-dcd85904601d828d59d88d1e64684bb99cc1633867ea2c8ebf36878d26962a03.scope - libcontainer container dcd85904601d828d59d88d1e64684bb99cc1633867ea2c8ebf36878d26962a03. Jan 24 00:30:21.852214 kubelet[2533]: E0124 00:30:21.852169 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:21.867434 kubelet[2533]: I0124 00:30:21.867290 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jfd9r" podStartSLOduration=0.867268632 podStartE2EDuration="867.268632ms" podCreationTimestamp="2026-01-24 00:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:21.866621245 +0000 UTC m=+7.215371678" watchObservedRunningTime="2026-01-24 00:30:21.867268632 +0000 UTC m=+7.216019065" Jan 24 00:30:21.909766 containerd[1458]: time="2026-01-24T00:30:21.908939234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-p56qj,Uid:cb6b1793-e9f1-4e0f-8a8c-fc1a005096f5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dcd85904601d828d59d88d1e64684bb99cc1633867ea2c8ebf36878d26962a03\"" Jan 24 00:30:21.914228 containerd[1458]: time="2026-01-24T00:30:21.914131578Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:30:22.241904 kubelet[2533]: E0124 00:30:22.239246 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:22.682134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709447714.mount: Deactivated successfully. 
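The "Nameserver limits exceeded" errors recurring throughout this boot come from kubelet capping a pod's resolv.conf at three nameservers, the glibc resolver limit; the host here evidently lists more than three, so the extras are dropped and the applied line (172.232.0.22 172.232.0.9 172.232.0.19) is logged. A simplified Go sketch of that check — kubelet's real parser lives in dns.go, so this is a stand-in:

```go
// nameservers.go — a sketch of the check behind the repeated
// "Nameserver limits exceeded" errors above: keep at most three
// nameserver entries from resolv.conf and report the applied line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```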
Jan 24 00:30:22.853056 kubelet[2533]: E0124 00:30:22.852238 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:23.090147 containerd[1458]: time="2026-01-24T00:30:23.090091594Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:23.090954 containerd[1458]: time="2026-01-24T00:30:23.090843671Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:30:23.091625 containerd[1458]: time="2026-01-24T00:30:23.091450416Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:23.093146 containerd[1458]: time="2026-01-24T00:30:23.093125212Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:23.094849 containerd[1458]: time="2026-01-24T00:30:23.094826457Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.180589118s" Jan 24 00:30:23.094929 containerd[1458]: time="2026-01-24T00:30:23.094913978Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:30:23.100471 containerd[1458]: time="2026-01-24T00:30:23.100415948Z" level=info msg="CreateContainer within sandbox \"dcd85904601d828d59d88d1e64684bb99cc1633867ea2c8ebf36878d26962a03\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:30:23.107649 containerd[1458]: time="2026-01-24T00:30:23.107615574Z" level=info msg="CreateContainer within sandbox \"dcd85904601d828d59d88d1e64684bb99cc1633867ea2c8ebf36878d26962a03\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8674efe91de4edcff8b10561255f7de79e650c5922bd181e8ede240327b697af\"" Jan 24 00:30:23.108248 containerd[1458]: time="2026-01-24T00:30:23.108231649Z" level=info msg="StartContainer for \"8674efe91de4edcff8b10561255f7de79e650c5922bd181e8ede240327b697af\"" Jan 24 00:30:23.134158 systemd[1]: Started cri-containerd-8674efe91de4edcff8b10561255f7de79e650c5922bd181e8ede240327b697af.scope - libcontainer container 8674efe91de4edcff8b10561255f7de79e650c5922bd181e8ede240327b697af. 
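The "Pulled image ... in 1.180589118s" entry above and the startup tracker just below bracket the same pull with wall-clock timestamps. The trailing "m=+7.26..." on kubelet timestamps is Go's monotonic-clock offset (seconds since the process started), not part of the wall-clock value, so it has to be stripped before parsing. A short Go sketch recomputing the pull window from the tracker's timestamps:

```go
// pulltiming.go — recompute the image-pull window from the kubelet
// timestamps in the pod_startup_latency_tracker entries nearby.
package main

import (
	"fmt"
	"strings"
	"time"
)

// layout matches kubelet's printed form, e.g.
// "2026-01-24 00:30:21.913306139 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parseKubeletTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i] // drop the monotonic-clock suffix
	}
	return time.Parse(layout, s)
}

func main() {
	started, _ := parseKubeletTime("2026-01-24 00:30:21.913306139 +0000 UTC m=+7.262056572")
	finished, _ := parseKubeletTime("2026-01-24 00:30:23.095767646 +0000 UTC m=+8.444518069")
	fmt.Println(finished.Sub(started)) // 1.182461507s, agreeing with containerd's ~1.18s pull
}
```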
Jan 24 00:30:23.158572 containerd[1458]: time="2026-01-24T00:30:23.158383246Z" level=info msg="StartContainer for \"8674efe91de4edcff8b10561255f7de79e650c5922bd181e8ede240327b697af\" returns successfully" Jan 24 00:30:23.857031 kubelet[2533]: E0124 00:30:23.856447 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:23.864952 kubelet[2533]: I0124 00:30:23.864915 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-p56qj" podStartSLOduration=1.6824409 podStartE2EDuration="2.864902397s" podCreationTimestamp="2026-01-24 00:30:21 +0000 UTC" firstStartedPulling="2026-01-24 00:30:21.913306139 +0000 UTC m=+7.262056572" lastFinishedPulling="2026-01-24 00:30:23.095767646 +0000 UTC m=+8.444518069" observedRunningTime="2026-01-24 00:30:23.864466143 +0000 UTC m=+9.213216586" watchObservedRunningTime="2026-01-24 00:30:23.864902397 +0000 UTC m=+9.213652820" Jan 24 00:30:24.517318 kubelet[2533]: E0124 00:30:24.517275 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:24.862504 kubelet[2533]: E0124 00:30:24.861783 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:25.865054 kubelet[2533]: E0124 00:30:25.864739 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:27.773189 kubelet[2533]: E0124 00:30:27.773143 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:28.712029 sudo[1680]: pam_unix(sudo:session): session closed for user root Jan 24 00:30:28.735357 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 24 00:30:28.744948 systemd[1]: sshd@6-172.234.200.204:22-68.220.241.50:49948.service: Deactivated successfully. Jan 24 00:30:28.752398 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:30:28.752672 systemd[1]: session-7.scope: Consumed 4.879s CPU time, 159.0M memory peak, 0B memory swap peak. Jan 24 00:30:28.753813 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:30:28.755482 systemd-logind[1448]: Removed session 7. Jan 24 00:30:32.051086 update_engine[1449]: I20260124 00:30:32.051038 1449 update_attempter.cc:509] Updating boot flags... Jan 24 00:30:32.115043 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2942) Jan 24 00:30:32.214044 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2938) Jan 24 00:30:33.293063 systemd[1]: Created slice kubepods-besteffort-podd00bd785_e853_4308_9119_c8f1efd7b82d.slice - libcontainer container kubepods-besteffort-podd00bd785_e853_4308_9119_c8f1efd7b82d.slice. 
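A note on the two durations in these tracker entries: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (00:30:23.864902397 - 00:30:21 = 2.864902397s for the tigera-operator pod), while podStartSLOduration appears to be that figure minus the image-pull window (2.864902397s - 1.182461507s = 1.682440890s, matching the logged 1.6824409). For the static pods earlier, which pulled nothing (firstStartedPulling is the zero time), the two values are identical. This reading is inferred from the numbers in this log rather than from kubelet documentation.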
Jan 24 00:30:33.387829 kubelet[2533]: I0124 00:30:33.387572 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d00bd785-e853-4308-9119-c8f1efd7b82d-tigera-ca-bundle\") pod \"calico-typha-7b56945b9-2cmh5\" (UID: \"d00bd785-e853-4308-9119-c8f1efd7b82d\") " pod="calico-system/calico-typha-7b56945b9-2cmh5" Jan 24 00:30:33.387829 kubelet[2533]: I0124 00:30:33.387625 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv644\" (UniqueName: \"kubernetes.io/projected/d00bd785-e853-4308-9119-c8f1efd7b82d-kube-api-access-kv644\") pod \"calico-typha-7b56945b9-2cmh5\" (UID: \"d00bd785-e853-4308-9119-c8f1efd7b82d\") " pod="calico-system/calico-typha-7b56945b9-2cmh5" Jan 24 00:30:33.387829 kubelet[2533]: I0124 00:30:33.387728 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d00bd785-e853-4308-9119-c8f1efd7b82d-typha-certs\") pod \"calico-typha-7b56945b9-2cmh5\" (UID: \"d00bd785-e853-4308-9119-c8f1efd7b82d\") " pod="calico-system/calico-typha-7b56945b9-2cmh5" Jan 24 00:30:33.474565 systemd[1]: Created slice kubepods-besteffort-pod558b022e_3cf4_460a_a08c_cac81b743b69.slice - libcontainer container kubepods-besteffort-pod558b022e_3cf4_460a_a08c_cac81b743b69.slice. Jan 24 00:30:33.488610 kubelet[2533]: I0124 00:30:33.488553 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-var-run-calico\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488610 kubelet[2533]: I0124 00:30:33.488603 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-flexvol-driver-host\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488610 kubelet[2533]: I0124 00:30:33.488621 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-xtables-lock\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488921 kubelet[2533]: I0124 00:30:33.488636 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-policysync\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488921 kubelet[2533]: I0124 00:30:33.488651 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4pww\" (UniqueName: \"kubernetes.io/projected/558b022e-3cf4-460a-a08c-cac81b743b69-kube-api-access-r4pww\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488921 kubelet[2533]: I0124 00:30:33.488680 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-cni-log-dir\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488921 kubelet[2533]: I0124 00:30:33.488693 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-cni-net-dir\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.488921 kubelet[2533]: I0124 00:30:33.488705 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-lib-modules\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.489114 kubelet[2533]: I0124 00:30:33.488718 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/558b022e-3cf4-460a-a08c-cac81b743b69-node-certs\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.489114 kubelet[2533]: I0124 00:30:33.488730 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/558b022e-3cf4-460a-a08c-cac81b743b69-tigera-ca-bundle\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.489114 kubelet[2533]: I0124 00:30:33.488743 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-var-lib-calico\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.489114 kubelet[2533]: I0124 00:30:33.488759 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/558b022e-3cf4-460a-a08c-cac81b743b69-cni-bin-dir\") pod \"calico-node-8np4s\" (UID: \"558b022e-3cf4-460a-a08c-cac81b743b69\") " pod="calico-system/calico-node-8np4s" Jan 24 00:30:33.594892 kubelet[2533]: E0124 00:30:33.594807 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.594892 kubelet[2533]: W0124 00:30:33.594833 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.595298 kubelet[2533]: E0124 00:30:33.595135 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.600923 kubelet[2533]: E0124 00:30:33.600273 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.600923 kubelet[2533]: W0124 00:30:33.600526 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.600923 kubelet[2533]: E0124 00:30:33.600544 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.603152 kubelet[2533]: E0124 00:30:33.602025 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:33.605230 containerd[1458]: time="2026-01-24T00:30:33.605010572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b56945b9-2cmh5,Uid:d00bd785-e853-4308-9119-c8f1efd7b82d,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:33.615496 kubelet[2533]: E0124 00:30:33.615436 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.615496 kubelet[2533]: W0124 00:30:33.615456 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.615697 kubelet[2533]: E0124 00:30:33.615583 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.642988 containerd[1458]: time="2026-01-24T00:30:33.642660752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:33.642988 containerd[1458]: time="2026-01-24T00:30:33.642714193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:33.642988 containerd[1458]: time="2026-01-24T00:30:33.642727903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:33.643513 containerd[1458]: time="2026-01-24T00:30:33.642909363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:33.672131 systemd[1]: Started cri-containerd-bb44944d90f721f950603472cf8d36a6535ce6da454dd03b1e9338a44bd52454.scope - libcontainer container bb44944d90f721f950603472cf8d36a6535ce6da454dd03b1e9338a44bd52454. 
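The FlexVolume failures repeated through this stretch all come from one probe: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and JSON-decodes its stdout. The binary is absent, stdout is empty, and decoding "" yields "unexpected end of JSON input". A minimal stub that would satisfy the probe, assuming the standard FlexVolume contract of JSON with a status field — a sketch, not Calico's actual flexvol driver:

```go
// uds.go — minimal FlexVolume driver stub for the probe failing above.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus is the JSON shape kubelet expects back from a
// FlexVolume driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	cmd := ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	enc := json.NewEncoder(os.Stdout)
	if cmd == "init" {
		// Report success and declare that this driver needs no
		// controller-side attach/detach.
		enc.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Per the FlexVolume convention, unimplemented calls answer
	// "Not supported" rather than failing outright.
	enc.Encode(driverStatus{Status: "Not supported", Message: cmd})
}
```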
Jan 24 00:30:33.696685 kubelet[2533]: E0124 00:30:33.696300 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:33.742516 containerd[1458]: time="2026-01-24T00:30:33.742441059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b56945b9-2cmh5,Uid:d00bd785-e853-4308-9119-c8f1efd7b82d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb44944d90f721f950603472cf8d36a6535ce6da454dd03b1e9338a44bd52454\"" Jan 24 00:30:33.743869 kubelet[2533]: E0124 00:30:33.743841 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:33.745893 containerd[1458]: time="2026-01-24T00:30:33.745752855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:30:33.781033 kubelet[2533]: E0124 00:30:33.780781 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:33.783501 containerd[1458]: time="2026-01-24T00:30:33.782061369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8np4s,Uid:558b022e-3cf4-460a-a08c-cac81b743b69,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:33.786870 kubelet[2533]: E0124 00:30:33.786834 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.788947 kubelet[2533]: W0124 00:30:33.788891 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.789096 kubelet[2533]: E0124 00:30:33.789082 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.789571 kubelet[2533]: E0124 00:30:33.789546 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.789855 kubelet[2533]: W0124 00:30:33.789838 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.790414 kubelet[2533]: E0124 00:30:33.790334 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.791678 kubelet[2533]: E0124 00:30:33.791663 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.791828 kubelet[2533]: W0124 00:30:33.791810 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.792048 kubelet[2533]: E0124 00:30:33.791956 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.794657 kubelet[2533]: E0124 00:30:33.794307 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.794657 kubelet[2533]: W0124 00:30:33.794320 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.794657 kubelet[2533]: E0124 00:30:33.794335 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.794975 kubelet[2533]: E0124 00:30:33.794963 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.795072 kubelet[2533]: W0124 00:30:33.795059 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.796732 kubelet[2533]: E0124 00:30:33.796052 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.797075 kubelet[2533]: E0124 00:30:33.797049 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.797292 kubelet[2533]: W0124 00:30:33.797272 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.797408 kubelet[2533]: E0124 00:30:33.797396 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.799439 kubelet[2533]: E0124 00:30:33.799337 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.799439 kubelet[2533]: W0124 00:30:33.799349 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.799439 kubelet[2533]: E0124 00:30:33.799361 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.800329 kubelet[2533]: E0124 00:30:33.800177 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.800329 kubelet[2533]: W0124 00:30:33.800210 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.800329 kubelet[2533]: E0124 00:30:33.800223 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.802730 kubelet[2533]: E0124 00:30:33.802647 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.802730 kubelet[2533]: W0124 00:30:33.802711 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.803145 kubelet[2533]: E0124 00:30:33.802747 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.803177 kubelet[2533]: E0124 00:30:33.803160 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.803177 kubelet[2533]: W0124 00:30:33.803173 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.803373 kubelet[2533]: E0124 00:30:33.803190 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.804433 kubelet[2533]: E0124 00:30:33.804293 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.804433 kubelet[2533]: W0124 00:30:33.804320 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.804433 kubelet[2533]: E0124 00:30:33.804336 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.804773 kubelet[2533]: E0124 00:30:33.804639 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.804773 kubelet[2533]: W0124 00:30:33.804658 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.804773 kubelet[2533]: E0124 00:30:33.804671 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.807363 kubelet[2533]: E0124 00:30:33.807243 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.807363 kubelet[2533]: W0124 00:30:33.807265 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.807363 kubelet[2533]: E0124 00:30:33.807280 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.807626 kubelet[2533]: E0124 00:30:33.807561 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.807626 kubelet[2533]: W0124 00:30:33.807579 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.807626 kubelet[2533]: E0124 00:30:33.807592 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.808065 kubelet[2533]: E0124 00:30:33.807873 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.808065 kubelet[2533]: W0124 00:30:33.807891 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.808065 kubelet[2533]: E0124 00:30:33.807905 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.808255 kubelet[2533]: E0124 00:30:33.808230 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.808255 kubelet[2533]: W0124 00:30:33.808251 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.808593 kubelet[2533]: E0124 00:30:33.808262 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.808593 kubelet[2533]: E0124 00:30:33.808577 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.808593 kubelet[2533]: W0124 00:30:33.808588 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.808653 kubelet[2533]: E0124 00:30:33.808600 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.809116 kubelet[2533]: E0124 00:30:33.808865 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.809116 kubelet[2533]: W0124 00:30:33.808888 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.809116 kubelet[2533]: E0124 00:30:33.808899 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.809710 kubelet[2533]: E0124 00:30:33.809582 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.809710 kubelet[2533]: W0124 00:30:33.809601 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.809710 kubelet[2533]: E0124 00:30:33.809615 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.810922 kubelet[2533]: E0124 00:30:33.810098 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.810922 kubelet[2533]: W0124 00:30:33.810115 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.810922 kubelet[2533]: E0124 00:30:33.810125 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.811116 kubelet[2533]: E0124 00:30:33.810970 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.811116 kubelet[2533]: W0124 00:30:33.810981 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.811116 kubelet[2533]: E0124 00:30:33.810990 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.812277 kubelet[2533]: I0124 00:30:33.812115 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/974bf216-052b-49fa-b0ab-b6a46ee1fdcb-kubelet-dir\") pod \"csi-node-driver-484vf\" (UID: \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\") " pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:33.813753 kubelet[2533]: E0124 00:30:33.812449 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.813753 kubelet[2533]: W0124 00:30:33.812477 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.813753 kubelet[2533]: E0124 00:30:33.812490 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.813753 kubelet[2533]: I0124 00:30:33.812516 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/974bf216-052b-49fa-b0ab-b6a46ee1fdcb-registration-dir\") pod \"csi-node-driver-484vf\" (UID: \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\") " pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:33.814542 kubelet[2533]: E0124 00:30:33.814515 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.814784 kubelet[2533]: W0124 00:30:33.814761 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.815216 kubelet[2533]: E0124 00:30:33.815036 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.819345 kubelet[2533]: E0124 00:30:33.819265 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.819345 kubelet[2533]: W0124 00:30:33.819288 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.819345 kubelet[2533]: E0124 00:30:33.819305 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.819747 kubelet[2533]: E0124 00:30:33.819674 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.819747 kubelet[2533]: W0124 00:30:33.819693 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.819747 kubelet[2533]: E0124 00:30:33.819705 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.820430 kubelet[2533]: I0124 00:30:33.820065 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/974bf216-052b-49fa-b0ab-b6a46ee1fdcb-socket-dir\") pod \"csi-node-driver-484vf\" (UID: \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\") " pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:33.820430 kubelet[2533]: E0124 00:30:33.820301 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.820430 kubelet[2533]: W0124 00:30:33.820312 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.820430 kubelet[2533]: E0124 00:30:33.820325 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.822316 kubelet[2533]: E0124 00:30:33.822269 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.822316 kubelet[2533]: W0124 00:30:33.822293 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.822316 kubelet[2533]: E0124 00:30:33.822304 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.823088 kubelet[2533]: E0124 00:30:33.822965 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.823088 kubelet[2533]: W0124 00:30:33.822987 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.823088 kubelet[2533]: E0124 00:30:33.823034 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.825171 kubelet[2533]: I0124 00:30:33.825043 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/974bf216-052b-49fa-b0ab-b6a46ee1fdcb-varrun\") pod \"csi-node-driver-484vf\" (UID: \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\") " pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:33.825448 kubelet[2533]: E0124 00:30:33.825394 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.825448 kubelet[2533]: W0124 00:30:33.825420 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.825448 kubelet[2533]: E0124 00:30:33.825434 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.825913 kubelet[2533]: E0124 00:30:33.825758 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.825913 kubelet[2533]: W0124 00:30:33.825780 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.825913 kubelet[2533]: E0124 00:30:33.825791 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.826174 kubelet[2533]: E0124 00:30:33.826147 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.826174 kubelet[2533]: W0124 00:30:33.826172 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.826258 kubelet[2533]: E0124 00:30:33.826187 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.826531 kubelet[2533]: E0124 00:30:33.826509 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.826531 kubelet[2533]: W0124 00:30:33.826528 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.826596 kubelet[2533]: E0124 00:30:33.826539 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.826968 kubelet[2533]: E0124 00:30:33.826824 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.826968 kubelet[2533]: W0124 00:30:33.826843 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.826968 kubelet[2533]: E0124 00:30:33.826854 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.826968 kubelet[2533]: I0124 00:30:33.826902 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj8qt\" (UniqueName: \"kubernetes.io/projected/974bf216-052b-49fa-b0ab-b6a46ee1fdcb-kube-api-access-tj8qt\") pod \"csi-node-driver-484vf\" (UID: \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\") " pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:33.827310 kubelet[2533]: E0124 00:30:33.827281 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.827310 kubelet[2533]: W0124 00:30:33.827302 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.827310 kubelet[2533]: E0124 00:30:33.827312 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.827563 kubelet[2533]: E0124 00:30:33.827531 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.827563 kubelet[2533]: W0124 00:30:33.827551 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.827563 kubelet[2533]: E0124 00:30:33.827559 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.843685 containerd[1458]: time="2026-01-24T00:30:33.842351107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:33.843685 containerd[1458]: time="2026-01-24T00:30:33.843449092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:33.843685 containerd[1458]: time="2026-01-24T00:30:33.843512583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:33.843966 containerd[1458]: time="2026-01-24T00:30:33.843761664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:33.889157 systemd[1]: Started cri-containerd-93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048.scope - libcontainer container 93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048. Jan 24 00:30:33.929077 kubelet[2533]: E0124 00:30:33.929036 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.930069 kubelet[2533]: W0124 00:30:33.930046 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.930186 kubelet[2533]: E0124 00:30:33.930172 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.931314 kubelet[2533]: E0124 00:30:33.931292 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.931431 kubelet[2533]: W0124 00:30:33.931415 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.931579 kubelet[2533]: E0124 00:30:33.931564 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.933038 kubelet[2533]: E0124 00:30:33.931990 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.933143 kubelet[2533]: W0124 00:30:33.933128 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.933218 kubelet[2533]: E0124 00:30:33.933207 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.933710 kubelet[2533]: E0124 00:30:33.933695 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.933798 kubelet[2533]: W0124 00:30:33.933786 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.934026 kubelet[2533]: E0124 00:30:33.933850 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.935331 kubelet[2533]: E0124 00:30:33.935318 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.935432 kubelet[2533]: W0124 00:30:33.935399 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.935432 kubelet[2533]: E0124 00:30:33.935417 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.935827 kubelet[2533]: E0124 00:30:33.935744 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.935827 kubelet[2533]: W0124 00:30:33.935755 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.935827 kubelet[2533]: E0124 00:30:33.935763 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.936687 kubelet[2533]: E0124 00:30:33.936592 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.936687 kubelet[2533]: W0124 00:30:33.936602 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.936687 kubelet[2533]: E0124 00:30:33.936611 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.937972 kubelet[2533]: E0124 00:30:33.937936 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.937972 kubelet[2533]: W0124 00:30:33.937948 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.937972 kubelet[2533]: E0124 00:30:33.937956 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.938509 kubelet[2533]: E0124 00:30:33.938380 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.938509 kubelet[2533]: W0124 00:30:33.938395 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.938509 kubelet[2533]: E0124 00:30:33.938404 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.939184 kubelet[2533]: E0124 00:30:33.939082 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.939184 kubelet[2533]: W0124 00:30:33.939092 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.939184 kubelet[2533]: E0124 00:30:33.939101 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.940060 kubelet[2533]: E0124 00:30:33.940034 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.940203 kubelet[2533]: W0124 00:30:33.940107 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.940203 kubelet[2533]: E0124 00:30:33.940123 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.940444 kubelet[2533]: E0124 00:30:33.940433 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.940554 kubelet[2533]: W0124 00:30:33.940490 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.940645 kubelet[2533]: E0124 00:30:33.940606 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.942277 kubelet[2533]: E0124 00:30:33.942239 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.942277 kubelet[2533]: W0124 00:30:33.942251 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.942277 kubelet[2533]: E0124 00:30:33.942260 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.942799 kubelet[2533]: E0124 00:30:33.942661 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.942799 kubelet[2533]: W0124 00:30:33.942675 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.942799 kubelet[2533]: E0124 00:30:33.942688 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.943156 kubelet[2533]: E0124 00:30:33.943121 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.943156 kubelet[2533]: W0124 00:30:33.943132 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.943156 kubelet[2533]: E0124 00:30:33.943141 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.944353 kubelet[2533]: E0124 00:30:33.944235 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.944353 kubelet[2533]: W0124 00:30:33.944246 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.944353 kubelet[2533]: E0124 00:30:33.944255 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.944556 kubelet[2533]: E0124 00:30:33.944546 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.944629 kubelet[2533]: W0124 00:30:33.944595 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.944629 kubelet[2533]: E0124 00:30:33.944608 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.946157 kubelet[2533]: E0124 00:30:33.946143 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.946342 kubelet[2533]: W0124 00:30:33.946203 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.946342 kubelet[2533]: E0124 00:30:33.946218 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.946538 kubelet[2533]: E0124 00:30:33.946527 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.946589 kubelet[2533]: W0124 00:30:33.946577 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.946635 kubelet[2533]: E0124 00:30:33.946624 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.948225 kubelet[2533]: E0124 00:30:33.948193 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.948225 kubelet[2533]: W0124 00:30:33.948204 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.948225 kubelet[2533]: E0124 00:30:33.948213 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.948669 kubelet[2533]: E0124 00:30:33.948568 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.948669 kubelet[2533]: W0124 00:30:33.948578 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.948669 kubelet[2533]: E0124 00:30:33.948587 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:30:33.949436 kubelet[2533]: E0124 00:30:33.949334 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.949436 kubelet[2533]: W0124 00:30:33.949347 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.949436 kubelet[2533]: E0124 00:30:33.949357 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.950020 kubelet[2533]: E0124 00:30:33.949763 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.950020 kubelet[2533]: W0124 00:30:33.949773 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.950020 kubelet[2533]: E0124 00:30:33.949782 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.952011 kubelet[2533]: E0124 00:30:33.951983 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.952011 kubelet[2533]: W0124 00:30:33.952029 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.952011 kubelet[2533]: E0124 00:30:33.952043 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.952833 kubelet[2533]: E0124 00:30:33.952786 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.952833 kubelet[2533]: W0124 00:30:33.952799 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.952833 kubelet[2533]: E0124 00:30:33.952809 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:30:33.966527 kubelet[2533]: E0124 00:30:33.966318 2533 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:30:33.966833 kubelet[2533]: W0124 00:30:33.966651 2533 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:30:33.966833 kubelet[2533]: E0124 00:30:33.966675 2533 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
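The kubelet re-probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec on every volume reconcile, so this three-line failure recurs throughout the log: the nodeagent~uds plugin directory exists (vendor~driver naming, so the binary must be named uds), but the binary is missing, the exec fails, stdout is empty, and decoding "" as JSON yields "unexpected end of JSON input". For reference, a minimal sketch of what a FlexVolume driver's init call is expected to print on stdout per the FlexVolume contract; this is a hypothetical stand-in, not the actual nodeagent~uds driver:

```go
// flexvol_init.go - minimal sketch of a FlexVolume driver's "init" handler.
// The kubelet invokes the binary as "<driver> init" and unmarshals stdout;
// an empty stdout is exactly what produces the error seen above.
package main

import (
	"encoding/json"
	"os"
)

// DriverStatus mirrors the JSON shape the kubelet's driver-call decodes.
type DriverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare no attach/detach support, so the
		// kubelet skips those calls for this driver.
		_ = enc.Encode(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Unimplemented calls must still answer with valid JSON.
	_ = enc.Encode(DriverStatus{Status: "Not supported"})
	os.Exit(1)
}
```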
Jan 24 00:30:34.008967 containerd[1458]: time="2026-01-24T00:30:34.008922011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8np4s,Uid:558b022e-3cf4-460a-a08c-cac81b743b69,Namespace:calico-system,Attempt:0,} returns sandbox id \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\""
Jan 24 00:30:34.010606 kubelet[2533]: E0124 00:30:34.010139 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:30:34.939323 containerd[1458]: time="2026-01-24T00:30:34.938344956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:34.942694 containerd[1458]: time="2026-01-24T00:30:34.942657835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:30:34.943244 containerd[1458]: time="2026-01-24T00:30:34.943219588Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:34.947388 containerd[1458]: time="2026-01-24T00:30:34.947359656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:30:34.948582 containerd[1458]: time="2026-01-24T00:30:34.948560602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.202780347s"
Jan 24 00:30:34.948625 containerd[1458]: time="2026-01-24T00:30:34.948585532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:30:34.949677 containerd[1458]: time="2026-01-24T00:30:34.949151954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:30:34.962680 containerd[1458]: time="2026-01-24T00:30:34.962655425Z" level=info msg="CreateContainer within sandbox \"bb44944d90f721f950603472cf8d36a6535ce6da454dd03b1e9338a44bd52454\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:30:34.973587 containerd[1458]: time="2026-01-24T00:30:34.973556614Z" level=info msg="CreateContainer within sandbox \"bb44944d90f721f950603472cf8d36a6535ce6da454dd03b1e9338a44bd52454\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6b8fad2c25a4af710e3b57a3b7a0902eb8e6bc0cd33148e04885c5219e111619\""
Jan 24 00:30:34.974150 containerd[1458]: time="2026-01-24T00:30:34.974129986Z" level=info msg="StartContainer for \"6b8fad2c25a4af710e3b57a3b7a0902eb8e6bc0cd33148e04885c5219e111619\""
Jan 24 00:30:35.010361 systemd[1]: Started cri-containerd-6b8fad2c25a4af710e3b57a3b7a0902eb8e6bc0cd33148e04885c5219e111619.scope - libcontainer container 6b8fad2c25a4af710e3b57a3b7a0902eb8e6bc0cd33148e04885c5219e111619.
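The recurring dns.go "Nameserver limits exceeded" entry reflects the kubelet's hard cap of three nameservers (the classic glibc resolver limit): the node's resolv.conf lists more than three servers, and only the first three (172.232.0.22, 172.232.0.9, 172.232.0.19) are applied when building pod resolver config. A small sketch of that truncation; this mirrors the behavior the kubelet warns about, not its actual code:

```go
// Truncate a resolv.conf nameserver list to the resolver limit of 3.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet enforces the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; dropping %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```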
Jan 24 00:30:35.061151 containerd[1458]: time="2026-01-24T00:30:35.061107909Z" level=info msg="StartContainer for \"6b8fad2c25a4af710e3b57a3b7a0902eb8e6bc0cd33148e04885c5219e111619\" returns successfully" Jan 24 00:30:35.619427 containerd[1458]: time="2026-01-24T00:30:35.619368075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:35.620237 containerd[1458]: time="2026-01-24T00:30:35.620196868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:30:35.621328 containerd[1458]: time="2026-01-24T00:30:35.620856071Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:35.622717 containerd[1458]: time="2026-01-24T00:30:35.622686399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:35.623468 containerd[1458]: time="2026-01-24T00:30:35.623430432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 674.254238ms" Jan 24 00:30:35.623546 containerd[1458]: time="2026-01-24T00:30:35.623530782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:30:35.627507 containerd[1458]: time="2026-01-24T00:30:35.627480989Z" level=info msg="CreateContainer within sandbox \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:30:35.652408 containerd[1458]: time="2026-01-24T00:30:35.652369623Z" level=info msg="CreateContainer within sandbox \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4\"" Jan 24 00:30:35.652961 containerd[1458]: time="2026-01-24T00:30:35.652913556Z" level=info msg="StartContainer for \"e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4\"" Jan 24 00:30:35.694164 systemd[1]: Started cri-containerd-e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4.scope - libcontainer container e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4. Jan 24 00:30:35.728771 containerd[1458]: time="2026-01-24T00:30:35.728737854Z" level=info msg="StartContainer for \"e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4\" returns successfully" Jan 24 00:30:35.746946 systemd[1]: cri-containerd-e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4.scope: Deactivated successfully. 
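For scale, the two pulls completed so far can be turned into rough throughput figures from the "bytes read" and duration fields above; note "bytes read" counts compressed registry bytes, so this is network throughput rather than unpacked image size:

```go
// Rough pull throughput from the containerd log fields above.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log (compressed)
		seconds float64 // reported pull duration
	}{
		{"calico/typha:v3.30.4", 35234628, 1.202780347},
		{"calico/pod2daemon-flexvol:v3.30.4", 4446754, 0.674254238},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
	}
}
```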
Jan 24 00:30:35.814687 kubelet[2533]: E0124 00:30:35.814645 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:35.832862 containerd[1458]: time="2026-01-24T00:30:35.832633261Z" level=info msg="shim disconnected" id=e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4 namespace=k8s.io Jan 24 00:30:35.832862 containerd[1458]: time="2026-01-24T00:30:35.832705711Z" level=warning msg="cleaning up after shim disconnected" id=e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4 namespace=k8s.io Jan 24 00:30:35.832862 containerd[1458]: time="2026-01-24T00:30:35.832715111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:30:35.896094 kubelet[2533]: E0124 00:30:35.895763 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:35.900142 kubelet[2533]: E0124 00:30:35.900113 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:35.901573 containerd[1458]: time="2026-01-24T00:30:35.901217219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:30:35.926429 kubelet[2533]: I0124 00:30:35.926166 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b56945b9-2cmh5" podStartSLOduration=1.7222437720000001 podStartE2EDuration="2.926149114s" podCreationTimestamp="2026-01-24 00:30:33 +0000 UTC" firstStartedPulling="2026-01-24 00:30:33.745170222 +0000 UTC m=+19.093920645" lastFinishedPulling="2026-01-24 00:30:34.949075554 +0000 UTC m=+20.297825987" observedRunningTime="2026-01-24 00:30:35.911102921 +0000 UTC m=+21.259853404" watchObservedRunningTime="2026-01-24 00:30:35.926149114 +0000 UTC m=+21.274899537" Jan 24 00:30:36.495895 systemd[1]: run-containerd-runc-k8s.io-e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4-runc.Fe4AvJ.mount: Deactivated successfully. Jan 24 00:30:36.496616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e356886a0a5ea9685cc3c646912ba74f69705d74128717ea4d72611426c5d3d4-rootfs.mount: Deactivated successfully. 
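The pod_startup_latency_tracker entry above encodes a small calculation worth unpacking: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets) from that, since pull time is excluded from the startup SLO. Reproducing the arithmetic from the logged values:

```go
// Reproduce the kubelet's startup metrics for calico-typha-7b56945b9-2cmh5
// from the values in the log entry above.
package main

import "fmt"

func main() {
	const (
		created         = 33.000000000 // podCreationTimestamp, seconds past 00:30
		observedRunning = 35.926149114 // observedRunningTime
		firstPullMono   = 19.093920645 // firstStartedPulling, m=+ offset
		lastPullMono    = 20.297825987 // lastFinishedPulling, m=+ offset
	)
	e2e := observedRunning - created
	pull := lastPullMono - firstPullMono
	fmt.Printf("podStartE2EDuration = %.9fs\n", e2e)      // 2.926149114s
	fmt.Printf("image pull window   = %.9fs\n", pull)     // 1.203905342s
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-pull) // 1.722243772s
}
```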
Jan 24 00:30:36.903859 kubelet[2533]: I0124 00:30:36.903809 2533 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:30:36.905168 kubelet[2533]: E0124 00:30:36.905137 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:37.773540 containerd[1458]: time="2026-01-24T00:30:37.773473770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.774522 containerd[1458]: time="2026-01-24T00:30:37.774356923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:30:37.778836 containerd[1458]: time="2026-01-24T00:30:37.778776630Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.779976 containerd[1458]: time="2026-01-24T00:30:37.779863863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.878551583s" Jan 24 00:30:37.779976 containerd[1458]: time="2026-01-24T00:30:37.779890714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:30:37.780761 containerd[1458]: time="2026-01-24T00:30:37.780569706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:37.786522 containerd[1458]: time="2026-01-24T00:30:37.786470728Z" level=info msg="CreateContainer within sandbox \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:30:37.810313 containerd[1458]: time="2026-01-24T00:30:37.810261286Z" level=info msg="CreateContainer within sandbox \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2\"" Jan 24 00:30:37.812914 containerd[1458]: time="2026-01-24T00:30:37.810895708Z" level=info msg="StartContainer for \"5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2\"" Jan 24 00:30:37.813317 kubelet[2533]: E0124 00:30:37.813259 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:37.855473 systemd[1]: Started cri-containerd-5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2.scope - libcontainer container 5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2. 
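The sequencing visible here, where flexvol-driver runs to completion (its scope deactivates and the shim disconnects) before install-cni starts inside the same sandbox 93c0…, is the standard calico-node init-container pattern: each init container must exit successfully before the next starts, and only then does the long-running calico-node container run. A hedged sketch of that pod shape using the core/v1 types; the image tags come from the log, everything else is illustrative:

```go
// Sketch of the calico-node pod shape implied by the log: two init
// containers that run in order, then the node agent itself.
// Requires k8s.io/api in go.mod; field values are illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		InitContainers: []corev1.Container{
			// Installs the FlexVolume driver onto the host, then exits.
			{Name: "flexvol-driver", Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4"},
			// Copies CNI binaries and writes /etc/cni/net.d config, then exits.
			{Name: "install-cni", Image: "ghcr.io/flatcar/calico/cni:v3.30.4"},
		},
		Containers: []corev1.Container{
			{Name: "calico-node", Image: "ghcr.io/flatcar/calico/node:v3.30.4"},
		},
	}
	for _, c := range spec.InitContainers {
		fmt.Println("init:", c.Name, c.Image)
	}
	fmt.Println("main:", spec.Containers[0].Name)
}
```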
Jan 24 00:30:37.896864 containerd[1458]: time="2026-01-24T00:30:37.896830155Z" level=info msg="StartContainer for \"5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2\" returns successfully" Jan 24 00:30:37.911868 kubelet[2533]: E0124 00:30:37.911817 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:38.385478 containerd[1458]: time="2026-01-24T00:30:38.385406231Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:30:38.391335 systemd[1]: cri-containerd-5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2.scope: Deactivated successfully. Jan 24 00:30:38.417249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2-rootfs.mount: Deactivated successfully. Jan 24 00:30:38.458031 kubelet[2533]: I0124 00:30:38.455909 2533 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:30:38.466486 containerd[1458]: time="2026-01-24T00:30:38.466278411Z" level=info msg="shim disconnected" id=5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2 namespace=k8s.io Jan 24 00:30:38.466486 containerd[1458]: time="2026-01-24T00:30:38.466328651Z" level=warning msg="cleaning up after shim disconnected" id=5da2519a1526571aed1470d8fef8f70e2f33c5380e8cc5219c9d393acc117bf2 namespace=k8s.io Jan 24 00:30:38.466486 containerd[1458]: time="2026-01-24T00:30:38.466338361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:30:38.501170 kubelet[2533]: E0124 00:30:38.501137 2533 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-234-200-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-234-200-204' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 24 00:30:38.501364 kubelet[2533]: E0124 00:30:38.501299 2533 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:172-234-200-204\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-234-200-204' and this object" logger="UnhandledError" reflector="object-\"calico-apiserver\"/\"calico-apiserver-certs\"" type="*v1.Secret" Jan 24 00:30:38.512877 systemd[1]: Created slice kubepods-besteffort-pod56b8cbd0_49ec_4c47_9ba0_12b9a7c6d526.slice - libcontainer container kubepods-besteffort-pod56b8cbd0_49ec_4c47_9ba0_12b9a7c6d526.slice. Jan 24 00:30:38.524787 systemd[1]: Created slice kubepods-burstable-pod44c8e029_8edf_43c5_9553_a705dde6d475.slice - libcontainer container kubepods-burstable-pod44c8e029_8edf_43c5_9553_a705dde6d475.slice. Jan 24 00:30:38.534762 systemd[1]: Created slice kubepods-besteffort-poda484550e_d179_4ca3_a2ad_d4ef7f1868f9.slice - libcontainer container kubepods-besteffort-poda484550e_d179_4ca3_a2ad_d4ef7f1868f9.slice. 
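The reload error above, and the RunPodSandbox failures that follow, trace back to two files that only a running calico-node produces: a network config in /etc/cni/net.d (install-cni has so far written only calico-kubeconfig, hence "no network config found") and /var/lib/calico/nodename, which the Calico CNI plugin stats before doing anything ("stat /var/lib/calico/nodename: no such file or directory"). A sketch that checks both preconditions; paths are from the log, the check itself is illustrative rather than Calico's code:

```go
// Check the two Calico readiness preconditions named in the errors:
// a CNI network config in /etc/cni/net.d and the nodename file that
// calico-node writes once it is running.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	ok := true

	// containerd loads *.conf/*.conflist from the CNI config dir; a lone
	// calico-kubeconfig does not count as a network config.
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	var configs []string
	for _, m := range matches {
		if strings.HasSuffix(m, ".conf") || strings.HasSuffix(m, ".conflist") {
			configs = append(configs, m)
		}
	}
	if len(configs) == 0 {
		fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
		ok = false
	}

	// The Calico CNI plugin stats this file to learn its node name.
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		fmt.Println(err, "- check that the calico/node container is running")
		ok = false
	}

	if ok {
		fmt.Println("calico networking preconditions satisfied:", configs)
	}
}
```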
Jan 24 00:30:38.543924 systemd[1]: Created slice kubepods-burstable-pod896b4aca_6a31_459c_a1e1_1b5e3edbde9c.slice - libcontainer container kubepods-burstable-pod896b4aca_6a31_459c_a1e1_1b5e3edbde9c.slice. Jan 24 00:30:38.552739 systemd[1]: Created slice kubepods-besteffort-pod6177d0af_c7ec_41af_a5e7_d14d37e79e3f.slice - libcontainer container kubepods-besteffort-pod6177d0af_c7ec_41af_a5e7_d14d37e79e3f.slice. Jan 24 00:30:38.559691 systemd[1]: Created slice kubepods-besteffort-podabb28b3a_6878_432c_ab4c_0e09969f7334.slice - libcontainer container kubepods-besteffort-podabb28b3a_6878_432c_ab4c_0e09969f7334.slice. Jan 24 00:30:38.567877 systemd[1]: Created slice kubepods-besteffort-pod7a9ec77a_a441_4797_ab37_24de3d316a35.slice - libcontainer container kubepods-besteffort-pod7a9ec77a_a441_4797_ab37_24de3d316a35.slice. Jan 24 00:30:38.574433 kubelet[2533]: I0124 00:30:38.574394 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44c8e029-8edf-43c5-9553-a705dde6d475-config-volume\") pod \"coredns-66bc5c9577-k6lzq\" (UID: \"44c8e029-8edf-43c5-9553-a705dde6d475\") " pod="kube-system/coredns-66bc5c9577-k6lzq" Jan 24 00:30:38.574433 kubelet[2533]: I0124 00:30:38.574430 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abb28b3a-6878-432c-ab4c-0e09969f7334-config\") pod \"goldmane-7c778bb748-pq45k\" (UID: \"abb28b3a-6878-432c-ab4c-0e09969f7334\") " pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:38.574552 kubelet[2533]: I0124 00:30:38.574446 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abb28b3a-6878-432c-ab4c-0e09969f7334-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pq45k\" (UID: \"abb28b3a-6878-432c-ab4c-0e09969f7334\") " pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:38.574552 kubelet[2533]: I0124 00:30:38.574460 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4dq\" (UniqueName: \"kubernetes.io/projected/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-kube-api-access-xg4dq\") pod \"calico-apiserver-94fb7866c-6mcp2\" (UID: \"a484550e-d179-4ca3-a2ad-d4ef7f1868f9\") " pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" Jan 24 00:30:38.574552 kubelet[2533]: I0124 00:30:38.574474 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6177d0af-c7ec-41af-a5e7-d14d37e79e3f-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4dbbbd84-pgmv7\" (UID: \"6177d0af-c7ec-41af-a5e7-d14d37e79e3f\") " pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" Jan 24 00:30:38.574552 kubelet[2533]: I0124 00:30:38.574489 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-backend-key-pair\") pod \"whisker-59f77f478b-khbcv\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " pod="calico-system/whisker-59f77f478b-khbcv" Jan 24 00:30:38.574552 kubelet[2533]: I0124 00:30:38.574505 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdtk\" (UniqueName: 
\"kubernetes.io/projected/abb28b3a-6878-432c-ab4c-0e09969f7334-kube-api-access-ngdtk\") pod \"goldmane-7c778bb748-pq45k\" (UID: \"abb28b3a-6878-432c-ab4c-0e09969f7334\") " pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:38.574668 kubelet[2533]: I0124 00:30:38.574517 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57j29\" (UniqueName: \"kubernetes.io/projected/6177d0af-c7ec-41af-a5e7-d14d37e79e3f-kube-api-access-57j29\") pod \"calico-kube-controllers-7d4dbbbd84-pgmv7\" (UID: \"6177d0af-c7ec-41af-a5e7-d14d37e79e3f\") " pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" Jan 24 00:30:38.574668 kubelet[2533]: I0124 00:30:38.574531 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-ca-bundle\") pod \"whisker-59f77f478b-khbcv\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " pod="calico-system/whisker-59f77f478b-khbcv" Jan 24 00:30:38.574668 kubelet[2533]: I0124 00:30:38.574545 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whcdl\" (UniqueName: \"kubernetes.io/projected/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-kube-api-access-whcdl\") pod \"whisker-59f77f478b-khbcv\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " pod="calico-system/whisker-59f77f478b-khbcv" Jan 24 00:30:38.574668 kubelet[2533]: I0124 00:30:38.574559 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896b4aca-6a31-459c-a1e1-1b5e3edbde9c-config-volume\") pod \"coredns-66bc5c9577-qg8z8\" (UID: \"896b4aca-6a31-459c-a1e1-1b5e3edbde9c\") " pod="kube-system/coredns-66bc5c9577-qg8z8" Jan 24 00:30:38.574668 kubelet[2533]: I0124 00:30:38.574573 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/abb28b3a-6878-432c-ab4c-0e09969f7334-goldmane-key-pair\") pod \"goldmane-7c778bb748-pq45k\" (UID: \"abb28b3a-6878-432c-ab4c-0e09969f7334\") " pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:38.575032 kubelet[2533]: I0124 00:30:38.574591 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-calico-apiserver-certs\") pod \"calico-apiserver-94fb7866c-6mcp2\" (UID: \"a484550e-d179-4ca3-a2ad-d4ef7f1868f9\") " pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" Jan 24 00:30:38.575032 kubelet[2533]: I0124 00:30:38.574606 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkrs\" (UniqueName: \"kubernetes.io/projected/7a9ec77a-a441-4797-ab37-24de3d316a35-kube-api-access-2gkrs\") pod \"calico-apiserver-94fb7866c-2j9nd\" (UID: \"7a9ec77a-a441-4797-ab37-24de3d316a35\") " pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" Jan 24 00:30:38.575032 kubelet[2533]: I0124 00:30:38.574622 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6mq\" (UniqueName: \"kubernetes.io/projected/44c8e029-8edf-43c5-9553-a705dde6d475-kube-api-access-mq6mq\") pod \"coredns-66bc5c9577-k6lzq\" (UID: \"44c8e029-8edf-43c5-9553-a705dde6d475\") " 
pod="kube-system/coredns-66bc5c9577-k6lzq" Jan 24 00:30:38.575032 kubelet[2533]: I0124 00:30:38.574634 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqlk\" (UniqueName: \"kubernetes.io/projected/896b4aca-6a31-459c-a1e1-1b5e3edbde9c-kube-api-access-5lqlk\") pod \"coredns-66bc5c9577-qg8z8\" (UID: \"896b4aca-6a31-459c-a1e1-1b5e3edbde9c\") " pod="kube-system/coredns-66bc5c9577-qg8z8" Jan 24 00:30:38.575032 kubelet[2533]: I0124 00:30:38.574648 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a9ec77a-a441-4797-ab37-24de3d316a35-calico-apiserver-certs\") pod \"calico-apiserver-94fb7866c-2j9nd\" (UID: \"7a9ec77a-a441-4797-ab37-24de3d316a35\") " pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" Jan 24 00:30:38.821097 containerd[1458]: time="2026-01-24T00:30:38.821056430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59f77f478b-khbcv,Uid:56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:38.838128 kubelet[2533]: E0124 00:30:38.835662 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:38.848374 containerd[1458]: time="2026-01-24T00:30:38.848345514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k6lzq,Uid:44c8e029-8edf-43c5-9553-a705dde6d475,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:38.850385 kubelet[2533]: E0124 00:30:38.850358 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:38.853310 containerd[1458]: time="2026-01-24T00:30:38.853268261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qg8z8,Uid:896b4aca-6a31-459c-a1e1-1b5e3edbde9c,Namespace:kube-system,Attempt:0,}" Jan 24 00:30:38.861264 containerd[1458]: time="2026-01-24T00:30:38.861232459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dbbbd84-pgmv7,Uid:6177d0af-c7ec-41af-a5e7-d14d37e79e3f,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:38.868828 containerd[1458]: time="2026-01-24T00:30:38.868774445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pq45k,Uid:abb28b3a-6878-432c-ab4c-0e09969f7334,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:38.922506 kubelet[2533]: E0124 00:30:38.922474 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:38.926269 containerd[1458]: time="2026-01-24T00:30:38.926187884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:30:38.943269 containerd[1458]: time="2026-01-24T00:30:38.943154182Z" level=error msg="Failed to destroy network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.943630 containerd[1458]: time="2026-01-24T00:30:38.943604604Z" level=error msg="encountered an error cleaning up failed sandbox 
\"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.943727 containerd[1458]: time="2026-01-24T00:30:38.943705344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59f77f478b-khbcv,Uid:56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.946084 kubelet[2533]: E0124 00:30:38.946043 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.946225 kubelet[2533]: E0124 00:30:38.946093 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59f77f478b-khbcv" Jan 24 00:30:38.946225 kubelet[2533]: E0124 00:30:38.946113 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59f77f478b-khbcv" Jan 24 00:30:38.946225 kubelet[2533]: E0124 00:30:38.946153 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59f77f478b-khbcv_calico-system(56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59f77f478b-khbcv_calico-system(56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59f77f478b-khbcv" podUID="56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" Jan 24 00:30:38.996647 containerd[1458]: time="2026-01-24T00:30:38.996584497Z" level=error msg="Failed to destroy network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.998117 containerd[1458]: time="2026-01-24T00:30:38.997962652Z" 
level=error msg="encountered an error cleaning up failed sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.998117 containerd[1458]: time="2026-01-24T00:30:38.998031872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k6lzq,Uid:44c8e029-8edf-43c5-9553-a705dde6d475,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.998304 kubelet[2533]: E0124 00:30:38.998235 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:38.998636 kubelet[2533]: E0124 00:30:38.998349 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k6lzq" Jan 24 00:30:38.998636 kubelet[2533]: E0124 00:30:38.998374 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k6lzq" Jan 24 00:30:38.998636 kubelet[2533]: E0124 00:30:38.998439 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k6lzq_kube-system(44c8e029-8edf-43c5-9553-a705dde6d475)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k6lzq_kube-system(44c8e029-8edf-43c5-9553-a705dde6d475)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k6lzq" podUID="44c8e029-8edf-43c5-9553-a705dde6d475" Jan 24 00:30:39.027207 containerd[1458]: time="2026-01-24T00:30:39.027167207Z" level=error msg="Failed to destroy network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.027664 
containerd[1458]: time="2026-01-24T00:30:39.027630629Z" level=error msg="encountered an error cleaning up failed sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.028291 containerd[1458]: time="2026-01-24T00:30:39.028267761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pq45k,Uid:abb28b3a-6878-432c-ab4c-0e09969f7334,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.028790 kubelet[2533]: E0124 00:30:39.028753 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.028851 kubelet[2533]: E0124 00:30:39.028809 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:39.028851 kubelet[2533]: E0124 00:30:39.028826 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pq45k" Jan 24 00:30:39.028901 kubelet[2533]: E0124 00:30:39.028868 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:30:39.034041 containerd[1458]: time="2026-01-24T00:30:39.034014460Z" level=error msg="Failed to destroy network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 24 00:30:39.034709 containerd[1458]: time="2026-01-24T00:30:39.034657992Z" level=error msg="encountered an error cleaning up failed sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.034801 containerd[1458]: time="2026-01-24T00:30:39.034762192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qg8z8,Uid:896b4aca-6a31-459c-a1e1-1b5e3edbde9c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.035287 kubelet[2533]: E0124 00:30:39.035039 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.035287 kubelet[2533]: E0124 00:30:39.035079 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qg8z8" Jan 24 00:30:39.035287 kubelet[2533]: E0124 00:30:39.035095 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qg8z8" Jan 24 00:30:39.035390 kubelet[2533]: E0124 00:30:39.035154 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qg8z8_kube-system(896b4aca-6a31-459c-a1e1-1b5e3edbde9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qg8z8_kube-system(896b4aca-6a31-459c-a1e1-1b5e3edbde9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qg8z8" podUID="896b4aca-6a31-459c-a1e1-1b5e3edbde9c" Jan 24 00:30:39.038984 containerd[1458]: time="2026-01-24T00:30:39.038879965Z" level=error msg="Failed to destroy network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.039518 containerd[1458]: time="2026-01-24T00:30:39.039461027Z" level=error msg="encountered an error cleaning up failed sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.039658 containerd[1458]: time="2026-01-24T00:30:39.039496687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dbbbd84-pgmv7,Uid:6177d0af-c7ec-41af-a5e7-d14d37e79e3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.040355 kubelet[2533]: E0124 00:30:39.039909 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.040355 kubelet[2533]: E0124 00:30:39.039936 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" Jan 24 00:30:39.040355 kubelet[2533]: E0124 00:30:39.039952 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" Jan 24 00:30:39.040439 kubelet[2533]: E0124 00:30:39.039984 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:30:39.172926 kubelet[2533]: I0124 00:30:39.171851 2533 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 
00:30:39.172926 kubelet[2533]: E0124 00:30:39.172220 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:39.682727 kubelet[2533]: E0124 00:30:39.682317 2533 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 24 00:30:39.682727 kubelet[2533]: E0124 00:30:39.682408 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-calico-apiserver-certs podName:a484550e-d179-4ca3-a2ad-d4ef7f1868f9 nodeName:}" failed. No retries permitted until 2026-01-24 00:30:40.182387464 +0000 UTC m=+25.531137897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-calico-apiserver-certs") pod "calico-apiserver-94fb7866c-6mcp2" (UID: "a484550e-d179-4ca3-a2ad-d4ef7f1868f9") : failed to sync secret cache: timed out waiting for the condition Jan 24 00:30:39.682727 kubelet[2533]: E0124 00:30:39.682625 2533 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 24 00:30:39.682727 kubelet[2533]: E0124 00:30:39.682659 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a9ec77a-a441-4797-ab37-24de3d316a35-calico-apiserver-certs podName:7a9ec77a-a441-4797-ab37-24de3d316a35 nodeName:}" failed. No retries permitted until 2026-01-24 00:30:40.182650155 +0000 UTC m=+25.531400578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/7a9ec77a-a441-4797-ab37-24de3d316a35-calico-apiserver-certs") pod "calico-apiserver-94fb7866c-2j9nd" (UID: "7a9ec77a-a441-4797-ab37-24de3d316a35") : failed to sync secret cache: timed out waiting for the condition Jan 24 00:30:39.703028 kubelet[2533]: E0124 00:30:39.701640 2533 projected.go:291] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.703028 kubelet[2533]: E0124 00:30:39.701694 2533 projected.go:196] Error preparing data for projected volume kube-api-access-2gkrs for pod calico-apiserver/calico-apiserver-94fb7866c-2j9nd: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.703028 kubelet[2533]: E0124 00:30:39.701759 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a9ec77a-a441-4797-ab37-24de3d316a35-kube-api-access-2gkrs podName:7a9ec77a-a441-4797-ab37-24de3d316a35 nodeName:}" failed. No retries permitted until 2026-01-24 00:30:40.201746137 +0000 UTC m=+25.550496560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2gkrs" (UniqueName: "kubernetes.io/projected/7a9ec77a-a441-4797-ab37-24de3d316a35-kube-api-access-2gkrs") pod "calico-apiserver-94fb7866c-2j9nd" (UID: "7a9ec77a-a441-4797-ab37-24de3d316a35") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.703278 kubelet[2533]: E0124 00:30:39.703074 2533 projected.go:291] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.703278 kubelet[2533]: E0124 00:30:39.703091 2533 projected.go:196] Error preparing data for projected volume kube-api-access-xg4dq for pod calico-apiserver/calico-apiserver-94fb7866c-6mcp2: failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.703278 kubelet[2533]: E0124 00:30:39.703129 2533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-kube-api-access-xg4dq podName:a484550e-d179-4ca3-a2ad-d4ef7f1868f9 nodeName:}" failed. No retries permitted until 2026-01-24 00:30:40.203118172 +0000 UTC m=+25.551868595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xg4dq" (UniqueName: "kubernetes.io/projected/a484550e-d179-4ca3-a2ad-d4ef7f1868f9-kube-api-access-xg4dq") pod "calico-apiserver-94fb7866c-6mcp2" (UID: "a484550e-d179-4ca3-a2ad-d4ef7f1868f9") : failed to sync configmap cache: timed out waiting for the condition Jan 24 00:30:39.799916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc-shm.mount: Deactivated successfully. Jan 24 00:30:39.800046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e-shm.mount: Deactivated successfully. Jan 24 00:30:39.819114 systemd[1]: Created slice kubepods-besteffort-pod974bf216_052b_49fa_b0ab_b6a46ee1fdcb.slice - libcontainer container kubepods-besteffort-pod974bf216_052b_49fa_b0ab_b6a46ee1fdcb.slice. 
Jan 24 00:30:39.826185 containerd[1458]: time="2026-01-24T00:30:39.826148601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-484vf,Uid:974bf216-052b-49fa-b0ab-b6a46ee1fdcb,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:39.921712 containerd[1458]: time="2026-01-24T00:30:39.921276660Z" level=error msg="Failed to destroy network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.924128 kubelet[2533]: I0124 00:30:39.923884 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:39.924575 containerd[1458]: time="2026-01-24T00:30:39.924311830Z" level=error msg="encountered an error cleaning up failed sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.924575 containerd[1458]: time="2026-01-24T00:30:39.924365280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-484vf,Uid:974bf216-052b-49fa-b0ab-b6a46ee1fdcb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.925670 kubelet[2533]: E0124 00:30:39.924899 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:39.925670 kubelet[2533]: E0124 00:30:39.925046 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:39.925670 kubelet[2533]: E0124 00:30:39.925066 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-484vf" Jan 24 00:30:39.925770 kubelet[2533]: E0124 00:30:39.925214 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:39.926137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1-shm.mount: Deactivated successfully. Jan 24 00:30:39.928377 containerd[1458]: time="2026-01-24T00:30:39.928229562Z" level=info msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" Jan 24 00:30:39.929048 containerd[1458]: time="2026-01-24T00:30:39.928728204Z" level=info msg="Ensure that sandbox 5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254 in task-service has been cleanup successfully" Jan 24 00:30:39.930064 kubelet[2533]: I0124 00:30:39.929740 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:39.932271 kubelet[2533]: I0124 00:30:39.932149 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:39.932695 containerd[1458]: time="2026-01-24T00:30:39.932566576Z" level=info msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" Jan 24 00:30:39.932816 containerd[1458]: time="2026-01-24T00:30:39.932754887Z" level=info msg="Ensure that sandbox 6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96 in task-service has been cleanup successfully" Jan 24 00:30:39.933444 containerd[1458]: time="2026-01-24T00:30:39.933422279Z" level=info msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" Jan 24 00:30:39.933712 containerd[1458]: time="2026-01-24T00:30:39.933618510Z" level=info msg="Ensure that sandbox a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc in task-service has been cleanup successfully" Jan 24 00:30:39.937604 kubelet[2533]: I0124 00:30:39.937132 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:39.937666 containerd[1458]: time="2026-01-24T00:30:39.937464482Z" level=info msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" Jan 24 00:30:39.937811 containerd[1458]: time="2026-01-24T00:30:39.937790093Z" level=info msg="Ensure that sandbox df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e in task-service has been cleanup successfully" Jan 24 00:30:39.941316 kubelet[2533]: I0124 00:30:39.941288 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:39.943087 kubelet[2533]: E0124 00:30:39.943063 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:39.943394 containerd[1458]: time="2026-01-24T00:30:39.942947230Z" level=info msg="StopPodSandbox for 
\"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" Jan 24 00:30:39.944185 containerd[1458]: time="2026-01-24T00:30:39.944142124Z" level=info msg="Ensure that sandbox 2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936 in task-service has been cleanup successfully" Jan 24 00:30:40.020847 containerd[1458]: time="2026-01-24T00:30:40.020733799Z" level=error msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" failed" error="failed to destroy network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.021047 kubelet[2533]: E0124 00:30:40.020953 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:40.021047 kubelet[2533]: E0124 00:30:40.021013 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96"} Jan 24 00:30:40.021220 kubelet[2533]: E0124 00:30:40.021070 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"896b4aca-6a31-459c-a1e1-1b5e3edbde9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:40.021220 kubelet[2533]: E0124 00:30:40.021111 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"896b4aca-6a31-459c-a1e1-1b5e3edbde9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qg8z8" podUID="896b4aca-6a31-459c-a1e1-1b5e3edbde9c" Jan 24 00:30:40.021820 containerd[1458]: time="2026-01-24T00:30:40.021796692Z" level=error msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" failed" error="failed to destroy network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.022051 kubelet[2533]: E0124 00:30:40.021991 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:40.022135 kubelet[2533]: E0124 00:30:40.022055 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254"} Jan 24 00:30:40.022135 kubelet[2533]: E0124 00:30:40.022078 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"abb28b3a-6878-432c-ab4c-0e09969f7334\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:40.022135 kubelet[2533]: E0124 00:30:40.022110 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"abb28b3a-6878-432c-ab4c-0e09969f7334\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:30:40.023578 containerd[1458]: time="2026-01-24T00:30:40.023555687Z" level=error msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" failed" error="failed to destroy network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.023837 kubelet[2533]: E0124 00:30:40.023731 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:40.023837 kubelet[2533]: E0124 00:30:40.023756 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc"} Jan 24 00:30:40.023837 kubelet[2533]: E0124 00:30:40.023778 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44c8e029-8edf-43c5-9553-a705dde6d475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:40.023837 kubelet[2533]: E0124 00:30:40.023799 2533 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"44c8e029-8edf-43c5-9553-a705dde6d475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k6lzq" podUID="44c8e029-8edf-43c5-9553-a705dde6d475" Jan 24 00:30:40.027895 containerd[1458]: time="2026-01-24T00:30:40.027447019Z" level=error msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" failed" error="failed to destroy network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.027939 kubelet[2533]: E0124 00:30:40.027723 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:40.027939 kubelet[2533]: E0124 00:30:40.027748 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e"} Jan 24 00:30:40.027939 kubelet[2533]: E0124 00:30:40.027768 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:40.027939 kubelet[2533]: E0124 00:30:40.027786 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59f77f478b-khbcv" podUID="56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" Jan 24 00:30:40.029291 containerd[1458]: time="2026-01-24T00:30:40.029237734Z" level=error msg="StopPodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" failed" error="failed to destroy network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.029501 kubelet[2533]: E0124 00:30:40.029377 2533 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:40.029501 kubelet[2533]: E0124 00:30:40.029426 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936"} Jan 24 00:30:40.029618 kubelet[2533]: E0124 00:30:40.029571 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6177d0af-c7ec-41af-a5e7-d14d37e79e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:40.029618 kubelet[2533]: E0124 00:30:40.029595 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6177d0af-c7ec-41af-a5e7-d14d37e79e3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:30:40.341640 containerd[1458]: time="2026-01-24T00:30:40.341596265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-6mcp2,Uid:a484550e-d179-4ca3-a2ad-d4ef7f1868f9,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:30:40.374055 containerd[1458]: time="2026-01-24T00:30:40.373100941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-2j9nd,Uid:7a9ec77a-a441-4797-ab37-24de3d316a35,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:30:40.415661 containerd[1458]: time="2026-01-24T00:30:40.415612060Z" level=error msg="Failed to destroy network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.416130 containerd[1458]: time="2026-01-24T00:30:40.416106712Z" level=error msg="encountered an error cleaning up failed sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.416231 containerd[1458]: time="2026-01-24T00:30:40.416208782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-6mcp2,Uid:a484550e-d179-4ca3-a2ad-d4ef7f1868f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.416537 kubelet[2533]: E0124 00:30:40.416508 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.416648 kubelet[2533]: E0124 00:30:40.416630 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" Jan 24 00:30:40.416725 kubelet[2533]: E0124 00:30:40.416711 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" Jan 24 00:30:40.416857 kubelet[2533]: E0124 00:30:40.416823 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:30:40.473451 containerd[1458]: time="2026-01-24T00:30:40.473379206Z" level=error msg="Failed to destroy network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.474288 containerd[1458]: time="2026-01-24T00:30:40.474264629Z" level=error msg="encountered an error cleaning up failed sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.474349 containerd[1458]: time="2026-01-24T00:30:40.474308769Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-94fb7866c-2j9nd,Uid:7a9ec77a-a441-4797-ab37-24de3d316a35,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.474920 kubelet[2533]: E0124 00:30:40.474565 2533 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:40.474920 kubelet[2533]: E0124 00:30:40.474623 2533 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" Jan 24 00:30:40.474920 kubelet[2533]: E0124 00:30:40.474639 2533 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" Jan 24 00:30:40.475048 kubelet[2533]: E0124 00:30:40.474685 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:30:40.944400 kubelet[2533]: I0124 00:30:40.944308 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:40.945383 containerd[1458]: time="2026-01-24T00:30:40.944956501Z" level=info msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" Jan 24 00:30:40.945383 containerd[1458]: time="2026-01-24T00:30:40.945153092Z" level=info msg="Ensure that sandbox 0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741 in task-service has been cleanup successfully" Jan 24 00:30:40.949134 kubelet[2533]: I0124 00:30:40.949107 2533 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:40.950062 containerd[1458]: time="2026-01-24T00:30:40.949778856Z" level=info msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" Jan 24 00:30:40.950726 containerd[1458]: time="2026-01-24T00:30:40.950382318Z" level=info msg="Ensure that sandbox 3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1 in task-service has been cleanup successfully" Jan 24 00:30:40.969603 kubelet[2533]: I0124 00:30:40.969579 2533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:40.970599 containerd[1458]: time="2026-01-24T00:30:40.970414888Z" level=info msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" Jan 24 00:30:40.972144 containerd[1458]: time="2026-01-24T00:30:40.972116524Z" level=info msg="Ensure that sandbox 9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965 in task-service has been cleanup successfully" Jan 24 00:30:41.012510 containerd[1458]: time="2026-01-24T00:30:41.012317704Z" level=error msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" failed" error="failed to destroy network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:41.014225 kubelet[2533]: E0124 00:30:41.014123 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:41.014225 kubelet[2533]: E0124 00:30:41.014170 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741"} Jan 24 00:30:41.014225 kubelet[2533]: E0124 00:30:41.014200 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a484550e-d179-4ca3-a2ad-d4ef7f1868f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:41.014406 kubelet[2533]: E0124 00:30:41.014224 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a484550e-d179-4ca3-a2ad-d4ef7f1868f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" 
podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:30:41.023313 containerd[1458]: time="2026-01-24T00:30:41.023281525Z" level=error msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" failed" error="failed to destroy network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:41.023834 kubelet[2533]: E0124 00:30:41.023728 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:41.023834 kubelet[2533]: E0124 00:30:41.023789 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965"} Jan 24 00:30:41.024087 kubelet[2533]: E0124 00:30:41.023814 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a9ec77a-a441-4797-ab37-24de3d316a35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:41.024087 kubelet[2533]: E0124 00:30:41.024062 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a9ec77a-a441-4797-ab37-24de3d316a35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:30:41.028343 containerd[1458]: time="2026-01-24T00:30:41.028303590Z" level=error msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" failed" error="failed to destroy network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:30:41.028465 kubelet[2533]: E0124 00:30:41.028441 2533 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:41.028498 
kubelet[2533]: E0124 00:30:41.028473 2533 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1"} Jan 24 00:30:41.028498 kubelet[2533]: E0124 00:30:41.028495 2533 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:30:41.028624 kubelet[2533]: E0124 00:30:41.028519 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"974bf216-052b-49fa-b0ab-b6a46ee1fdcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:42.584009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2631976352.mount: Deactivated successfully. Jan 24 00:30:42.618897 containerd[1458]: time="2026-01-24T00:30:42.618855377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:42.621600 containerd[1458]: time="2026-01-24T00:30:42.620620821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:30:42.621600 containerd[1458]: time="2026-01-24T00:30:42.620694982Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:42.623028 containerd[1458]: time="2026-01-24T00:30:42.622975268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:30:42.623745 containerd[1458]: time="2026-01-24T00:30:42.623722190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.697500426s" Jan 24 00:30:42.623822 containerd[1458]: time="2026-01-24T00:30:42.623807400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:30:42.650621 containerd[1458]: time="2026-01-24T00:30:42.650564092Z" level=info msg="CreateContainer within sandbox \"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:30:42.665026 containerd[1458]: time="2026-01-24T00:30:42.664969620Z" level=info msg="CreateContainer within sandbox 
\"93c09489bc7a65e49dc82f6a93147992925b701868d024747a2172c587f26048\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8\"" Jan 24 00:30:42.666493 containerd[1458]: time="2026-01-24T00:30:42.666473304Z" level=info msg="StartContainer for \"43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8\"" Jan 24 00:30:42.697146 systemd[1]: Started cri-containerd-43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8.scope - libcontainer container 43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8. Jan 24 00:30:42.729026 containerd[1458]: time="2026-01-24T00:30:42.728976611Z" level=info msg="StartContainer for \"43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8\" returns successfully" Jan 24 00:30:42.822455 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:30:42.822536 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:30:42.922099 containerd[1458]: time="2026-01-24T00:30:42.920649764Z" level=info msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" Jan 24 00:30:42.980078 kubelet[2533]: E0124 00:30:42.980044 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:43.021478 kubelet[2533]: I0124 00:30:43.021249 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8np4s" podStartSLOduration=1.40844089 podStartE2EDuration="10.021231599s" podCreationTimestamp="2026-01-24 00:30:33 +0000 UTC" firstStartedPulling="2026-01-24 00:30:34.011751583 +0000 UTC m=+19.360502006" lastFinishedPulling="2026-01-24 00:30:42.624542292 +0000 UTC m=+27.973292715" observedRunningTime="2026-01-24 00:30:43.004257497 +0000 UTC m=+28.353007920" watchObservedRunningTime="2026-01-24 00:30:43.021231599 +0000 UTC m=+28.369982022" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.019 [INFO][3713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.020 [INFO][3713] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" iface="eth0" netns="/var/run/netns/cni-26dd3df4-9bc9-c512-d3b8-2e23167b2efe" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.021 [INFO][3713] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" iface="eth0" netns="/var/run/netns/cni-26dd3df4-9bc9-c512-d3b8-2e23167b2efe" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.021 [INFO][3713] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" iface="eth0" netns="/var/run/netns/cni-26dd3df4-9bc9-c512-d3b8-2e23167b2efe" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.021 [INFO][3713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.022 [INFO][3713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.044 [INFO][3727] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.044 [INFO][3727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.045 [INFO][3727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.055 [WARNING][3727] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.055 [INFO][3727] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.056 [INFO][3727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:43.063146 containerd[1458]: 2026-01-24 00:30:43.060 [INFO][3713] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:30:43.064052 containerd[1458]: time="2026-01-24T00:30:43.063693006Z" level=info msg="TearDown network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" successfully" Jan 24 00:30:43.064052 containerd[1458]: time="2026-01-24T00:30:43.063719676Z" level=info msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" returns successfully" Jan 24 00:30:43.106905 kubelet[2533]: I0124 00:30:43.106879 2533 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-backend-key-pair\") pod \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " Jan 24 00:30:43.107194 kubelet[2533]: I0124 00:30:43.107061 2533 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whcdl\" (UniqueName: \"kubernetes.io/projected/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-kube-api-access-whcdl\") pod \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " Jan 24 00:30:43.107194 kubelet[2533]: I0124 00:30:43.107094 2533 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-ca-bundle\") pod \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\" (UID: \"56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526\") " Jan 24 00:30:43.109219 kubelet[2533]: I0124 00:30:43.108757 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" (UID: "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:30:43.110898 kubelet[2533]: I0124 00:30:43.110853 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" (UID: "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:30:43.112773 kubelet[2533]: I0124 00:30:43.112753 2533 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-kube-api-access-whcdl" (OuterVolumeSpecName: "kube-api-access-whcdl") pod "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" (UID: "56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526"). InnerVolumeSpecName "kube-api-access-whcdl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:30:43.208720 kubelet[2533]: I0124 00:30:43.208583 2533 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-backend-key-pair\") on node \"172-234-200-204\" DevicePath \"\"" Jan 24 00:30:43.208720 kubelet[2533]: I0124 00:30:43.208616 2533 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-whcdl\" (UniqueName: \"kubernetes.io/projected/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-kube-api-access-whcdl\") on node \"172-234-200-204\" DevicePath \"\"" Jan 24 00:30:43.208720 kubelet[2533]: I0124 00:30:43.208626 2533 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526-whisker-ca-bundle\") on node \"172-234-200-204\" DevicePath \"\"" Jan 24 00:30:43.585797 systemd[1]: run-netns-cni\x2d26dd3df4\x2d9bc9\x2dc512\x2dd3b8\x2d2e23167b2efe.mount: Deactivated successfully. Jan 24 00:30:43.585994 systemd[1]: var-lib-kubelet-pods-56b8cbd0\x2d49ec\x2d4c47\x2d9ba0\x2d12b9a7c6d526-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhcdl.mount: Deactivated successfully. Jan 24 00:30:43.586180 systemd[1]: var-lib-kubelet-pods-56b8cbd0\x2d49ec\x2d4c47\x2d9ba0\x2d12b9a7c6d526-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:30:43.985506 kubelet[2533]: I0124 00:30:43.984760 2533 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:30:43.985506 kubelet[2533]: E0124 00:30:43.985090 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:43.989469 systemd[1]: Removed slice kubepods-besteffort-pod56b8cbd0_49ec_4c47_9ba0_12b9a7c6d526.slice - libcontainer container kubepods-besteffort-pod56b8cbd0_49ec_4c47_9ba0_12b9a7c6d526.slice. Jan 24 00:30:44.052909 systemd[1]: Created slice kubepods-besteffort-pod1f8681ee_3380_4dd8_9bb7_c40be678fb1b.slice - libcontainer container kubepods-besteffort-pod1f8681ee_3380_4dd8_9bb7_c40be678fb1b.slice. 
Jan 24 00:30:44.116817 kubelet[2533]: I0124 00:30:44.116288 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8681ee-3380-4dd8-9bb7-c40be678fb1b-whisker-ca-bundle\") pod \"whisker-86547fc664-566mp\" (UID: \"1f8681ee-3380-4dd8-9bb7-c40be678fb1b\") " pod="calico-system/whisker-86547fc664-566mp" Jan 24 00:30:44.116817 kubelet[2533]: I0124 00:30:44.116361 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f8681ee-3380-4dd8-9bb7-c40be678fb1b-whisker-backend-key-pair\") pod \"whisker-86547fc664-566mp\" (UID: \"1f8681ee-3380-4dd8-9bb7-c40be678fb1b\") " pod="calico-system/whisker-86547fc664-566mp" Jan 24 00:30:44.116817 kubelet[2533]: I0124 00:30:44.116421 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw9t\" (UniqueName: \"kubernetes.io/projected/1f8681ee-3380-4dd8-9bb7-c40be678fb1b-kube-api-access-bzw9t\") pod \"whisker-86547fc664-566mp\" (UID: \"1f8681ee-3380-4dd8-9bb7-c40be678fb1b\") " pod="calico-system/whisker-86547fc664-566mp" Jan 24 00:30:44.363978 containerd[1458]: time="2026-01-24T00:30:44.363933169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86547fc664-566mp,Uid:1f8681ee-3380-4dd8-9bb7-c40be678fb1b,Namespace:calico-system,Attempt:0,}" Jan 24 00:30:44.564793 systemd-networkd[1381]: cali0efdb6e08ae: Link UP Jan 24 00:30:44.565564 systemd-networkd[1381]: cali0efdb6e08ae: Gained carrier Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.423 [INFO][3834] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.433 [INFO][3834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-whisker--86547fc664--566mp-eth0 whisker-86547fc664- calico-system 1f8681ee-3380-4dd8-9bb7-c40be678fb1b 904 0 2026-01-24 00:30:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86547fc664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-200-204 whisker-86547fc664-566mp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0efdb6e08ae [] [] }} ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.433 [INFO][3834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.503 [INFO][3845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" HandleID="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Workload="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.503 [INFO][3845] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" HandleID="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Workload="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-204", "pod":"whisker-86547fc664-566mp", "timestamp":"2026-01-24 00:30:44.503281377 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.503 [INFO][3845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.503 [INFO][3845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.503 [INFO][3845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.516 [INFO][3845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.521 [INFO][3845] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.525 [INFO][3845] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.526 [INFO][3845] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.528 [INFO][3845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.529 [INFO][3845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.530 [INFO][3845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.534 [INFO][3845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.544 [INFO][3845] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.193/26] block=192.168.69.192/26 handle="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.544 [INFO][3845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.193/26] handle="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" host="172-234-200-204" Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.544 [INFO][3845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:44.604903 containerd[1458]: 2026-01-24 00:30:44.545 [INFO][3845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.193/26] IPv6=[] ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" HandleID="k8s-pod-network.be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Workload="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.549 [INFO][3834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-whisker--86547fc664--566mp-eth0", GenerateName:"whisker-86547fc664-", Namespace:"calico-system", SelfLink:"", UID:"1f8681ee-3380-4dd8-9bb7-c40be678fb1b", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86547fc664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"whisker-86547fc664-566mp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0efdb6e08ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.549 [INFO][3834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.193/32] ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.549 [INFO][3834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0efdb6e08ae ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.567 [INFO][3834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.567 [INFO][3834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" 
WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-whisker--86547fc664--566mp-eth0", GenerateName:"whisker-86547fc664-", Namespace:"calico-system", SelfLink:"", UID:"1f8681ee-3380-4dd8-9bb7-c40be678fb1b", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86547fc664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd", Pod:"whisker-86547fc664-566mp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0efdb6e08ae", MAC:"d2:32:2b:1e:db:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:44.605508 containerd[1458]: 2026-01-24 00:30:44.596 [INFO][3834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd" Namespace="calico-system" Pod="whisker-86547fc664-566mp" WorkloadEndpoint="172--234--200--204-k8s-whisker--86547fc664--566mp-eth0" Jan 24 00:30:44.651131 containerd[1458]: time="2026-01-24T00:30:44.649355900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:44.651378 containerd[1458]: time="2026-01-24T00:30:44.651299325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:44.651492 containerd[1458]: time="2026-01-24T00:30:44.651442605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:44.656546 containerd[1458]: time="2026-01-24T00:30:44.656331877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:44.688872 systemd[1]: Started cri-containerd-be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd.scope - libcontainer container be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd. 
Jan 24 00:30:44.787173 kernel: bpftool[3939]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:30:44.789707 containerd[1458]: time="2026-01-24T00:30:44.789666180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86547fc664-566mp,Uid:1f8681ee-3380-4dd8-9bb7-c40be678fb1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"be1e2526fac4d8db772e3c04baf0ed7168e5073d5d646397b5c0e7c5710e5abd\"" Jan 24 00:30:44.793238 containerd[1458]: time="2026-01-24T00:30:44.793042168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:30:44.816732 kubelet[2533]: I0124 00:30:44.816699 2533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526" path="/var/lib/kubelet/pods/56b8cbd0-49ec-4c47-9ba0-12b9a7c6d526/volumes" Jan 24 00:30:44.922872 containerd[1458]: time="2026-01-24T00:30:44.922639012Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:44.923604 containerd[1458]: time="2026-01-24T00:30:44.923503034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:30:44.923604 containerd[1458]: time="2026-01-24T00:30:44.923570245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:30:44.924275 kubelet[2533]: E0124 00:30:44.923790 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:44.924275 kubelet[2533]: E0124 00:30:44.923823 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:30:44.924275 kubelet[2533]: E0124 00:30:44.923883 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:44.926056 containerd[1458]: time="2026-01-24T00:30:44.926033710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:30:45.040743 systemd-networkd[1381]: vxlan.calico: Link UP Jan 24 00:30:45.040755 systemd-networkd[1381]: vxlan.calico: Gained carrier Jan 24 00:30:45.056558 containerd[1458]: time="2026-01-24T00:30:45.056517089Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:45.058234 containerd[1458]: time="2026-01-24T00:30:45.058178143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:30:45.058433 containerd[1458]: time="2026-01-24T00:30:45.058317553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:45.058673 kubelet[2533]: E0124 00:30:45.058626 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:45.058963 kubelet[2533]: E0124 00:30:45.058675 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:30:45.058963 kubelet[2533]: E0124 00:30:45.058736 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:45.058963 kubelet[2533]: E0124 00:30:45.058775 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:30:45.992175 kubelet[2533]: E0124 00:30:45.992118 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:30:46.036214 systemd-networkd[1381]: cali0efdb6e08ae: Gained IPv6LL Jan 24 00:30:46.676806 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Jan 24 00:30:50.817267 containerd[1458]: time="2026-01-24T00:30:50.816891432Z" level=info msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.860 [INFO][4033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.860 [INFO][4033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" iface="eth0" netns="/var/run/netns/cni-7471f16f-b9d0-9064-26b5-492e83c39c2a" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.861 [INFO][4033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" iface="eth0" netns="/var/run/netns/cni-7471f16f-b9d0-9064-26b5-492e83c39c2a" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.862 [INFO][4033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" iface="eth0" netns="/var/run/netns/cni-7471f16f-b9d0-9064-26b5-492e83c39c2a" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.862 [INFO][4033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.862 [INFO][4033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.893 [INFO][4041] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.893 [INFO][4041] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.893 [INFO][4041] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.899 [WARNING][4041] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.899 [INFO][4041] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.900 [INFO][4041] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:50.908028 containerd[1458]: 2026-01-24 00:30:50.902 [INFO][4033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:30:50.908953 containerd[1458]: time="2026-01-24T00:30:50.908502898Z" level=info msg="TearDown network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" successfully" Jan 24 00:30:50.908953 containerd[1458]: time="2026-01-24T00:30:50.908531888Z" level=info msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" returns successfully" Jan 24 00:30:50.909635 systemd[1]: run-netns-cni\x2d7471f16f\x2db9d0\x2d9064\x2d26b5\x2d492e83c39c2a.mount: Deactivated successfully. Jan 24 00:30:50.911053 kubelet[2533]: E0124 00:30:50.910976 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:50.912200 containerd[1458]: time="2026-01-24T00:30:50.911889104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k6lzq,Uid:44c8e029-8edf-43c5-9553-a705dde6d475,Namespace:kube-system,Attempt:1,}" Jan 24 00:30:51.035151 systemd-networkd[1381]: cali66575f2af4b: Link UP Jan 24 00:30:51.035621 systemd-networkd[1381]: cali66575f2af4b: Gained carrier Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.959 [INFO][4047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0 coredns-66bc5c9577- kube-system 44c8e029-8edf-43c5-9553-a705dde6d475 938 0 2026-01-24 00:30:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-204 coredns-66bc5c9577-k6lzq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66575f2af4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.960 [INFO][4047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.984 [INFO][4059] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" HandleID="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.984 [INFO][4059] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" HandleID="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-204", "pod":"coredns-66bc5c9577-k6lzq", "timestamp":"2026-01-24 00:30:50.984171209 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.984 [INFO][4059] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.984 [INFO][4059] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.984 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:50.995 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.003 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.007 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.009 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.014 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.014 [INFO][4059] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.015 [INFO][4059] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1 Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.023 [INFO][4059] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.028 [INFO][4059] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.194/26] block=192.168.69.192/26 handle="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" host="172-234-200-204" Jan 24 00:30:51.049918 
containerd[1458]: 2026-01-24 00:30:51.028 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.194/26] handle="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" host="172-234-200-204" Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.028 [INFO][4059] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:51.049918 containerd[1458]: 2026-01-24 00:30:51.028 [INFO][4059] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.194/26] IPv6=[] ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" HandleID="k8s-pod-network.b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.031 [INFO][4047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44c8e029-8edf-43c5-9553-a705dde6d475", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"coredns-66bc5c9577-k6lzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66575f2af4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.031 [INFO][4047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.194/32] ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 
00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.031 [INFO][4047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66575f2af4b ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.034 [INFO][4047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.034 [INFO][4047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44c8e029-8edf-43c5-9553-a705dde6d475", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1", Pod:"coredns-66bc5c9577-k6lzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66575f2af4b", MAC:"12:15:b5:f0:e2:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:51.052227 containerd[1458]: 2026-01-24 00:30:51.043 [INFO][4047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1" Namespace="kube-system" Pod="coredns-66bc5c9577-k6lzq" 
WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:30:51.072970 containerd[1458]: time="2026-01-24T00:30:51.072618393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:51.072970 containerd[1458]: time="2026-01-24T00:30:51.072661803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:51.072970 containerd[1458]: time="2026-01-24T00:30:51.072675193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:51.072970 containerd[1458]: time="2026-01-24T00:30:51.072753193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:51.111129 systemd[1]: Started cri-containerd-b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1.scope - libcontainer container b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1. Jan 24 00:30:51.148474 containerd[1458]: time="2026-01-24T00:30:51.148394646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k6lzq,Uid:44c8e029-8edf-43c5-9553-a705dde6d475,Namespace:kube-system,Attempt:1,} returns sandbox id \"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1\"" Jan 24 00:30:51.149368 kubelet[2533]: E0124 00:30:51.149348 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:51.155279 containerd[1458]: time="2026-01-24T00:30:51.155244687Z" level=info msg="CreateContainer within sandbox \"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:30:51.163613 containerd[1458]: time="2026-01-24T00:30:51.163540509Z" level=info msg="CreateContainer within sandbox \"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"983e436d1889bddb39090d89356b43f505ba4000f214955503777a2e6643ddd4\"" Jan 24 00:30:51.164230 containerd[1458]: time="2026-01-24T00:30:51.164210100Z" level=info msg="StartContainer for \"983e436d1889bddb39090d89356b43f505ba4000f214955503777a2e6643ddd4\"" Jan 24 00:30:51.196134 systemd[1]: Started cri-containerd-983e436d1889bddb39090d89356b43f505ba4000f214955503777a2e6643ddd4.scope - libcontainer container 983e436d1889bddb39090d89356b43f505ba4000f214955503777a2e6643ddd4. Jan 24 00:30:51.225606 containerd[1458]: time="2026-01-24T00:30:51.225516712Z" level=info msg="StartContainer for \"983e436d1889bddb39090d89356b43f505ba4000f214955503777a2e6643ddd4\" returns successfully" Jan 24 00:30:51.815779 containerd[1458]: time="2026-01-24T00:30:51.815714405Z" level=info msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" Jan 24 00:30:51.816386 containerd[1458]: time="2026-01-24T00:30:51.815712945Z" level=info msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" Jan 24 00:30:51.915122 systemd[1]: run-containerd-runc-k8s.io-b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1-runc.3ImUfn.mount: Deactivated successfully. 
Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.895 [INFO][4172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.895 [INFO][4172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" iface="eth0" netns="/var/run/netns/cni-8fda6f08-fb2f-5e27-80e1-dc3ef4ea0fab" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.895 [INFO][4172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" iface="eth0" netns="/var/run/netns/cni-8fda6f08-fb2f-5e27-80e1-dc3ef4ea0fab" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.896 [INFO][4172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" iface="eth0" netns="/var/run/netns/cni-8fda6f08-fb2f-5e27-80e1-dc3ef4ea0fab" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.896 [INFO][4172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.896 [INFO][4172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.925 [INFO][4187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.925 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.925 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.933 [WARNING][4187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.933 [INFO][4187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.934 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:51.940245 containerd[1458]: 2026-01-24 00:30:51.937 [INFO][4172] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:30:51.941488 containerd[1458]: time="2026-01-24T00:30:51.941422903Z" level=info msg="TearDown network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" successfully" Jan 24 00:30:51.941488 containerd[1458]: time="2026-01-24T00:30:51.941481573Z" level=info msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" returns successfully" Jan 24 00:30:51.944423 containerd[1458]: time="2026-01-24T00:30:51.944373148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-6mcp2,Uid:a484550e-d179-4ca3-a2ad-d4ef7f1868f9,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:30:51.948946 systemd[1]: run-netns-cni\x2d8fda6f08\x2dfb2f\x2d5e27\x2d80e1\x2ddc3ef4ea0fab.mount: Deactivated successfully. Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.891 [INFO][4168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.892 [INFO][4168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" iface="eth0" netns="/var/run/netns/cni-91d8a7b6-e9cf-b645-ccec-56e9aecc5fcf" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.893 [INFO][4168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" iface="eth0" netns="/var/run/netns/cni-91d8a7b6-e9cf-b645-ccec-56e9aecc5fcf" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.894 [INFO][4168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" iface="eth0" netns="/var/run/netns/cni-91d8a7b6-e9cf-b645-ccec-56e9aecc5fcf" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.894 [INFO][4168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.894 [INFO][4168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.932 [INFO][4185] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.932 [INFO][4185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.934 [INFO][4185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.941 [WARNING][4185] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.941 [INFO][4185] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.946 [INFO][4185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:51.958077 containerd[1458]: 2026-01-24 00:30:51.950 [INFO][4168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:30:51.958810 containerd[1458]: time="2026-01-24T00:30:51.958604769Z" level=info msg="TearDown network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" successfully" Jan 24 00:30:51.958810 containerd[1458]: time="2026-01-24T00:30:51.958629409Z" level=info msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" returns successfully" Jan 24 00:30:51.965122 kubelet[2533]: E0124 00:30:51.965048 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:51.968548 systemd[1]: run-netns-cni\x2d91d8a7b6\x2de9cf\x2db645\x2dccec\x2d56e9aecc5fcf.mount: Deactivated successfully. Jan 24 00:30:51.970866 containerd[1458]: time="2026-01-24T00:30:51.970712547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qg8z8,Uid:896b4aca-6a31-459c-a1e1-1b5e3edbde9c,Namespace:kube-system,Attempt:1,}" Jan 24 00:30:52.004796 kubelet[2533]: E0124 00:30:52.004739 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:52.057127 kubelet[2533]: I0124 00:30:52.056940 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k6lzq" podStartSLOduration=31.056917281 podStartE2EDuration="31.056917281s" podCreationTimestamp="2026-01-24 00:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:52.031163915 +0000 UTC m=+37.379914358" watchObservedRunningTime="2026-01-24 00:30:52.056917281 +0000 UTC m=+37.405667714" Jan 24 00:30:52.119124 systemd-networkd[1381]: cali66575f2af4b: Gained IPv6LL Jan 24 00:30:52.152136 systemd-networkd[1381]: cali52247adce68: Link UP Jan 24 00:30:52.154091 systemd-networkd[1381]: cali52247adce68: Gained carrier Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.034 [INFO][4199] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0 calico-apiserver-94fb7866c- calico-apiserver a484550e-d179-4ca3-a2ad-d4ef7f1868f9 952 0 2026-01-24 00:30:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94fb7866c 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-204 calico-apiserver-94fb7866c-6mcp2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52247adce68 [] [] }} ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.034 [INFO][4199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.087 [INFO][4226] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" HandleID="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.087 [INFO][4226] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" HandleID="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-204", "pod":"calico-apiserver-94fb7866c-6mcp2", "timestamp":"2026-01-24 00:30:52.087481093 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.087 [INFO][4226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.087 [INFO][4226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
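Note how every IPAM operation in this log is bracketed by "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" / "Released host-wide IPAM lock": Calico serializes all address assignments and releases on a node so that concurrent CNI invocations (here [4185], [4187] and [4226] run back-to-back) cannot race on the same allocation block. A minimal sketch of that pattern using an advisory file lock; the lock path and helper names are illustrative stand-ins, not Calico's actual implementation.

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // Illustrative stand-in for a per-host IPAM lock: one advisory file lock
    // that every CNI invocation on the node must hold while it reads or
    // writes allocation state. The path is hypothetical.
    const lockPath = "/var/run/example-ipam.lock"

    func withHostWideLock(fn func() error) error {
        f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()

        // Blocks until no other process on this host holds the lock.
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

        return fn() // e.g. assign or release an address under the lock
    }

    func main() {
        _ = withHostWideLock(func() error {
            fmt.Println("acquired host-wide lock; safe to touch IPAM state")
            return nil
        })
    }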
Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.087 [INFO][4226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.101 [INFO][4226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.106 [INFO][4226] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.113 [INFO][4226] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.118 [INFO][4226] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.121 [INFO][4226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.122 [INFO][4226] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.123 [INFO][4226] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.128 [INFO][4226] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.136 [INFO][4226] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.195/26] block=192.168.69.192/26 handle="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.136 [INFO][4226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.195/26] handle="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" host="172-234-200-204" Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.136 [INFO][4226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
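The allocation above first confirms this host's affinity to the block 192.168.69.192/26, then hands out the first free address in it: 192.168.69.195 here, with .196, .197 and .198 following in the later requests in this log. A toy sketch of that next-free scan over a /26; real Calico keeps the per-block allocation state in its datastore rather than in memory, and the "already taken" addresses below are assumed for illustration.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block models a /26 affinity block; allocated is an in-memory stand-in
    // for the per-block allocation records Calico keeps in its datastore.
    type block struct {
        cidr      netip.Prefix
        allocated map[netip.Addr]bool
    }

    // nextFree walks the block and claims the first unallocated address.
    func (b *block) nextFree() (netip.Addr, bool) {
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if !b.allocated[a] {
                b.allocated[a] = true
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            cidr:      netip.MustParsePrefix("192.168.69.192/26"),
            allocated: map[netip.Addr]bool{},
        }
        // Assume .192-.194 were taken by earlier endpoints on this node.
        for _, s := range []string{"192.168.69.192", "192.168.69.193", "192.168.69.194"} {
            b.allocated[netip.MustParseAddr(s)] = true
        }
        a, _ := b.nextFree()
        fmt.Println("assigned:", a) // 192.168.69.195, matching the log
    }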
Jan 24 00:30:52.170934 containerd[1458]: 2026-01-24 00:30:52.136 [INFO][4226] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.195/26] IPv6=[] ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" HandleID="k8s-pod-network.72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.141 [INFO][4199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a484550e-d179-4ca3-a2ad-d4ef7f1868f9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"calico-apiserver-94fb7866c-6mcp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52247adce68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.141 [INFO][4199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.195/32] ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.141 [INFO][4199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52247adce68 ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.151 [INFO][4199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.151 [INFO][4199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a484550e-d179-4ca3-a2ad-d4ef7f1868f9", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec", Pod:"calico-apiserver-94fb7866c-6mcp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52247adce68", MAC:"0e:c8:0f:f9:a6:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:52.172283 containerd[1458]: 2026-01-24 00:30:52.166 [INFO][4199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-6mcp2" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:30:52.210048 containerd[1458]: time="2026-01-24T00:30:52.209429654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:52.210048 containerd[1458]: time="2026-01-24T00:30:52.209487445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:52.210048 containerd[1458]: time="2026-01-24T00:30:52.209511195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:52.210048 containerd[1458]: time="2026-01-24T00:30:52.209630025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:52.241210 systemd[1]: Started cri-containerd-72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec.scope - libcontainer container 72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec. 
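Once the endpoint is written, containerd's runc v2 shim loads its ttrpc plugins and systemd starts the container in a transient scope named cri-containerd-<container-id>.scope. The earlier "run-netns-cni\x2d…​.mount" lines are the same naming scheme applied to the bind-mounted netns path /run/netns/cni-8fda…, rendered through systemd's unit-name escaping, where "/" becomes "-" and a literal "-" becomes "\x2d". A simplified sketch of that escaping (real systemd-escape also hex-escapes other non-alphanumeric characters):

    package main

    import (
        "fmt"
        "strings"
    )

    // unitEscape is a simplified version of systemd's path escaping: drop the
    // leading "/", turn literal "-" into `\x2d`, then turn "/" into "-".
    func unitEscape(path string) string {
        p := strings.TrimPrefix(path, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-")
    }

    func main() {
        netns := "/run/netns/cni-8fda6f08-fb2f-5e27-80e1-dc3ef4ea0fab"
        fmt.Println(unitEscape(netns) + ".mount")
        // run-netns-cni\x2d8fda6f08\x2dfb2f\x2d5e27\x2d80e1\x2ddc3ef4ea0fab.mount

        id := "72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec"
        fmt.Println("cri-containerd-" + id + ".scope")
    }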
Jan 24 00:30:52.264655 systemd-networkd[1381]: cali61c7879b000: Link UP Jan 24 00:30:52.268794 systemd-networkd[1381]: cali61c7879b000: Gained carrier Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.052 [INFO][4210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0 coredns-66bc5c9577- kube-system 896b4aca-6a31-459c-a1e1-1b5e3edbde9c 951 0 2026-01-24 00:30:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-200-204 coredns-66bc5c9577-qg8z8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali61c7879b000 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.052 [INFO][4210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.120 [INFO][4231] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" HandleID="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.126 [INFO][4231] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" HandleID="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-200-204", "pod":"coredns-66bc5c9577-qg8z8", "timestamp":"2026-01-24 00:30:52.12096948 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.126 [INFO][4231] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.136 [INFO][4231] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.137 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.202 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.210 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.215 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.219 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.221 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.221 [INFO][4231] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.223 [INFO][4231] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38 Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.232 [INFO][4231] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.239 [INFO][4231] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.196/26] block=192.168.69.192/26 handle="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.239 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.196/26] handle="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" host="172-234-200-204" Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.239 [INFO][4231] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
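"Writing block in order to claim IPs" marks the point where the candidate address actually becomes durable: the allocator persists the updated block before reporting success, and in Calico that write is effectively a compare-and-swap against the datastore revision, so a concurrent writer forces a re-read and retry instead of a double allocation. An illustrative retry loop over a toy versioned store (not Calico's code):

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
    )

    var errConflict = errors.New("revision conflict")

    // toyStore stands in for a versioned datastore (etcd / the k8s API):
    // an update only succeeds if the caller read the latest revision.
    type toyStore struct{ rev atomic.Int64 }

    func (s *toyStore) update(readRev int64) error {
        if !s.rev.CompareAndSwap(readRev, readRev+1) {
            return errConflict
        }
        return nil
    }

    // claimIP retries the read-modify-write until the block write lands.
    func claimIP(s *toyStore) {
        for {
            rev := s.rev.Load() // read the block at some revision
            // ... pick the next free address from the block here ...
            if err := s.update(rev); err == nil {
                fmt.Println("block written, IP claimed at rev", rev+1)
                return
            }
            // someone else wrote the block first; re-read and retry
        }
    }

    func main() {
        claimIP(&toyStore{})
    }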
Jan 24 00:30:52.286271 containerd[1458]: 2026-01-24 00:30:52.239 [INFO][4231] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.196/26] IPv6=[] ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" HandleID="k8s-pod-network.6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.243 [INFO][4210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"896b4aca-6a31-459c-a1e1-1b5e3edbde9c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"coredns-66bc5c9577-qg8z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c7879b000", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.243 [INFO][4210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.196/32] ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.243 [INFO][4210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61c7879b000 ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 
00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.264 [INFO][4210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.265 [INFO][4210] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"896b4aca-6a31-459c-a1e1-1b5e3edbde9c", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38", Pod:"coredns-66bc5c9577-qg8z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c7879b000", MAC:"d6:fd:6b:c2:b2:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:52.286915 containerd[1458]: 2026-01-24 00:30:52.277 [INFO][4210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38" Namespace="kube-system" Pod="coredns-66bc5c9577-qg8z8" WorkloadEndpoint="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:30:52.307863 containerd[1458]: time="2026-01-24T00:30:52.307779433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:52.309188 containerd[1458]: time="2026-01-24T00:30:52.309130224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:52.309396 containerd[1458]: time="2026-01-24T00:30:52.309372785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:52.309681 containerd[1458]: time="2026-01-24T00:30:52.309647025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:52.332355 containerd[1458]: time="2026-01-24T00:30:52.332324247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-6mcp2,Uid:a484550e-d179-4ca3-a2ad-d4ef7f1868f9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec\"" Jan 24 00:30:52.335439 containerd[1458]: time="2026-01-24T00:30:52.334873021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:52.341630 systemd[1]: Started cri-containerd-6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38.scope - libcontainer container 6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38. Jan 24 00:30:52.389875 containerd[1458]: time="2026-01-24T00:30:52.389386857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qg8z8,Uid:896b4aca-6a31-459c-a1e1-1b5e3edbde9c,Namespace:kube-system,Attempt:1,} returns sandbox id \"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38\"" Jan 24 00:30:52.391127 kubelet[2533]: E0124 00:30:52.391043 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:52.396690 containerd[1458]: time="2026-01-24T00:30:52.396521117Z" level=info msg="CreateContainer within sandbox \"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:30:52.405766 containerd[1458]: time="2026-01-24T00:30:52.405732700Z" level=info msg="CreateContainer within sandbox \"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c48213b42eaacb0e3d41c8ffffb0001c554d0813140478bb11bb200d0eb7f60\"" Jan 24 00:30:52.407330 containerd[1458]: time="2026-01-24T00:30:52.407262392Z" level=info msg="StartContainer for \"8c48213b42eaacb0e3d41c8ffffb0001c554d0813140478bb11bb200d0eb7f60\"" Jan 24 00:30:52.443620 systemd[1]: Started cri-containerd-8c48213b42eaacb0e3d41c8ffffb0001c554d0813140478bb11bb200d0eb7f60.scope - libcontainer container 8c48213b42eaacb0e3d41c8ffffb0001c554d0813140478bb11bb200d0eb7f60. 
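The coredns endpoint dump above prints its named ports as Go hex literals (Port:0x35 and so on). Decoded, they are exactly the five ports of a stock coredns deployment:

    package main

    import "fmt"

    // The coredns WorkloadEndpointPort values from the endpoint dump above,
    // with the hex Port fields decoded to decimal.
    func main() {
        ports := []struct {
            name  string
            proto string
            port  uint16
        }{
            {"dns", "UDP", 0x35},               // 53
            {"dns-tcp", "TCP", 0x35},           // 53
            {"metrics", "TCP", 0x23c1},         // 9153
            {"liveness-probe", "TCP", 0x1f90},  // 8080
            {"readiness-probe", "TCP", 0x1ff5}, // 8181
        }
        for _, p := range ports {
            fmt.Printf("%-16s %-3s %d\n", p.name, p.proto, p.port)
        }
    }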
Jan 24 00:30:52.468199 containerd[1458]: time="2026-01-24T00:30:52.468158438Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:52.468892 containerd[1458]: time="2026-01-24T00:30:52.468834959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:52.469088 containerd[1458]: time="2026-01-24T00:30:52.468882519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:52.469437 kubelet[2533]: E0124 00:30:52.469247 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:52.469437 kubelet[2533]: E0124 00:30:52.469289 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:52.469437 kubelet[2533]: E0124 00:30:52.469363 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:52.469437 kubelet[2533]: E0124 00:30:52.469394 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:30:52.480208 containerd[1458]: time="2026-01-24T00:30:52.479892114Z" level=info msg="StartContainer for \"8c48213b42eaacb0e3d41c8ffffb0001c554d0813140478bb11bb200d0eb7f60\" returns successfully" Jan 24 00:30:53.010390 kubelet[2533]: E0124 00:30:53.010189 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:30:53.022015 kubelet[2533]: E0124 00:30:53.021956 2533 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:53.023176 kubelet[2533]: E0124 00:30:53.023047 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:53.045133 kubelet[2533]: I0124 00:30:53.045067 2533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qg8z8" podStartSLOduration=32.045053343 podStartE2EDuration="32.045053343s" podCreationTimestamp="2026-01-24 00:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:30:53.043896132 +0000 UTC m=+38.392646555" watchObservedRunningTime="2026-01-24 00:30:53.045053343 +0000 UTC m=+38.393803766" Jan 24 00:30:53.815536 containerd[1458]: time="2026-01-24T00:30:53.814408685Z" level=info msg="StopPodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" Jan 24 00:30:53.815536 containerd[1458]: time="2026-01-24T00:30:53.814952496Z" level=info msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.895 [INFO][4400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.896 [INFO][4400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" iface="eth0" netns="/var/run/netns/cni-ad3927b0-93c4-5612-2ad1-5c3a86cff2ae" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.897 [INFO][4400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" iface="eth0" netns="/var/run/netns/cni-ad3927b0-93c4-5612-2ad1-5c3a86cff2ae" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.898 [INFO][4400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" iface="eth0" netns="/var/run/netns/cni-ad3927b0-93c4-5612-2ad1-5c3a86cff2ae" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.898 [INFO][4400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.898 [INFO][4400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.941 [INFO][4418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.942 [INFO][4418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
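Meanwhile the apiserver pull above has entered backoff: the ghcr.io tag does not resolve (NotFound), kubelet records ErrImagePull on the first failure, and every subsequent sync reports ImagePullBackOff while the retry interval grows. The shape of that retry schedule, sketched below, uses the commonly cited kubelet defaults (10s doubling to a 5-minute cap); the exact constants are assumed, not read from this node's configuration.

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative shape of kubelet's image-pull backoff: start small, double
    // on each consecutive failure, cap after a few minutes. The 10s/300s
    // values are assumed defaults.
    func main() {
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d: ErrImagePull -> ImagePullBackOff, retry in %v\n",
                attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }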
Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.942 [INFO][4418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.949 [WARNING][4418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.949 [INFO][4418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.950 [INFO][4418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:53.959706 containerd[1458]: 2026-01-24 00:30:53.955 [INFO][4400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:30:53.959706 containerd[1458]: time="2026-01-24T00:30:53.959026015Z" level=info msg="TearDown network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" successfully" Jan 24 00:30:53.959706 containerd[1458]: time="2026-01-24T00:30:53.959086435Z" level=info msg="StopPodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" returns successfully" Jan 24 00:30:53.961594 systemd[1]: run-netns-cni\x2dad3927b0\x2d93c4\x2d5612\x2d2ad1\x2d5c3a86cff2ae.mount: Deactivated successfully. Jan 24 00:30:53.963253 containerd[1458]: time="2026-01-24T00:30:53.963189481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dbbbd84-pgmv7,Uid:6177d0af-c7ec-41af-a5e7-d14d37e79e3f,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.887 [INFO][4397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.887 [INFO][4397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" iface="eth0" netns="/var/run/netns/cni-39cb8ed1-837c-6029-1fdf-243049757310" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.889 [INFO][4397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" iface="eth0" netns="/var/run/netns/cni-39cb8ed1-837c-6029-1fdf-243049757310" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.891 [INFO][4397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" iface="eth0" netns="/var/run/netns/cni-39cb8ed1-837c-6029-1fdf-243049757310" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.891 [INFO][4397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.891 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.946 [INFO][4412] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.947 [INFO][4412] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.950 [INFO][4412] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.968 [WARNING][4412] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.968 [INFO][4412] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.971 [INFO][4412] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:53.992820 containerd[1458]: 2026-01-24 00:30:53.987 [INFO][4397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:30:53.997036 containerd[1458]: time="2026-01-24T00:30:53.992948150Z" level=info msg="TearDown network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" successfully" Jan 24 00:30:53.997036 containerd[1458]: time="2026-01-24T00:30:53.992969790Z" level=info msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" returns successfully" Jan 24 00:30:53.999364 systemd[1]: run-netns-cni\x2d39cb8ed1\x2d837c\x2d6029\x2d1fdf\x2d243049757310.mount: Deactivated successfully. 
Jan 24 00:30:54.001507 containerd[1458]: time="2026-01-24T00:30:54.001162461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-484vf,Uid:974bf216-052b-49fa-b0ab-b6a46ee1fdcb,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:54.025624 kubelet[2533]: E0124 00:30:54.024446 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:54.029481 kubelet[2533]: E0124 00:30:54.029456 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:54.034068 kubelet[2533]: E0124 00:30:54.032267 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:30:54.124464 systemd-networkd[1381]: calib9b32692612: Link UP Jan 24 00:30:54.129614 systemd-networkd[1381]: calib9b32692612: Gained carrier Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.056 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-csi--node--driver--484vf-eth0 csi-node-driver- calico-system 974bf216-052b-49fa-b0ab-b6a46ee1fdcb 995 0 2026-01-24 00:30:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-200-204 csi-node-driver-484vf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib9b32692612 [] [] }} ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.056 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.082 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" HandleID="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.082 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" HandleID="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" 
Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-204", "pod":"csi-node-driver-484vf", "timestamp":"2026-01-24 00:30:54.082515211 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.082 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.082 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.082 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.088 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.092 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.096 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.097 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.099 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.099 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.101 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.105 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.111 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.197/26] block=192.168.69.192/26 handle="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.111 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.197/26] handle="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" host="172-234-200-204" Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.111 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:30:54.143525 containerd[1458]: 2026-01-24 00:30:54.111 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.197/26] IPv6=[] ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" HandleID="k8s-pod-network.2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.117 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-csi--node--driver--484vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974bf216-052b-49fa-b0ab-b6a46ee1fdcb", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"csi-node-driver-484vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9b32692612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.117 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.197/32] ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.117 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9b32692612 ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.128 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.128 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" 
Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-csi--node--driver--484vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974bf216-052b-49fa-b0ab-b6a46ee1fdcb", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a", Pod:"csi-node-driver-484vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9b32692612", MAC:"4a:55:84:f7:41:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:54.144210 containerd[1458]: 2026-01-24 00:30:54.139 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a" Namespace="calico-system" Pod="csi-node-driver-484vf" WorkloadEndpoint="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:30:54.164951 systemd-networkd[1381]: cali52247adce68: Gained IPv6LL Jan 24 00:30:54.169611 containerd[1458]: time="2026-01-24T00:30:54.169462898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:54.170051 containerd[1458]: time="2026-01-24T00:30:54.169912649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:54.171656 containerd[1458]: time="2026-01-24T00:30:54.171495490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:54.172291 containerd[1458]: time="2026-01-24T00:30:54.171890231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:54.197650 systemd[1]: Started cri-containerd-2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a.scope - libcontainer container 2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a. 
Jan 24 00:30:54.228269 systemd-networkd[1381]: cali61c7879b000: Gained IPv6LL Jan 24 00:30:54.235643 systemd-networkd[1381]: calicdf24e18151: Link UP Jan 24 00:30:54.236798 systemd-networkd[1381]: calicdf24e18151: Gained carrier Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.057 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0 calico-kube-controllers-7d4dbbbd84- calico-system 6177d0af-c7ec-41af-a5e7-d14d37e79e3f 996 0 2026-01-24 00:30:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4dbbbd84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-200-204 calico-kube-controllers-7d4dbbbd84-pgmv7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicdf24e18151 [] [] }} ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.057 [INFO][4427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.091 [INFO][4453] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" HandleID="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.091 [INFO][4453] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" HandleID="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-204", "pod":"calico-kube-controllers-7d4dbbbd84-pgmv7", "timestamp":"2026-01-24 00:30:54.091308562 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.091 [INFO][4453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.112 [INFO][4453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.112 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.192 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.200 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.205 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.206 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.208 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.209 [INFO][4453] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.210 [INFO][4453] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9 Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.214 [INFO][4453] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.221 [INFO][4453] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.198/26] block=192.168.69.192/26 handle="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.221 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.198/26] handle="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" host="172-234-200-204" Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.221 [INFO][4453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
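[editor's note] The ipam entries above walk a fixed sequence: confirm block affinity for 192.168.69.192/26, load the block, then claim the next free address under a handle. A rough sketch of that "assign 1 address from block" step, assuming a simple in-memory used-set in place of Calico's real per-block bitmap:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans a CIDR block for the first address not yet in use.
    // Calico's allocator tracks per-block allocation state and handles;
    // this set only mirrors the claim step seen in the log.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.69.192/26")
        used := map[netip.Addr]bool{} // assume .192-.197 already claimed, per the log
        for i := 0; i <= 5; i++ {
            used[netip.MustParseAddr(fmt.Sprintf("192.168.69.%d", 192+i))] = true
        }
        if ip, ok := nextFree(block, used); ok {
            fmt.Println(ip) // 192.168.69.198, matching the claim above
        }
    }

[end note]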
Jan 24 00:30:54.255884 containerd[1458]: 2026-01-24 00:30:54.221 [INFO][4453] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.198/26] IPv6=[] ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" HandleID="k8s-pod-network.05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 00:30:54.227 [INFO][4427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0", GenerateName:"calico-kube-controllers-7d4dbbbd84-", Namespace:"calico-system", SelfLink:"", UID:"6177d0af-c7ec-41af-a5e7-d14d37e79e3f", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dbbbd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"calico-kube-controllers-7d4dbbbd84-pgmv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf24e18151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 00:30:54.227 [INFO][4427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.198/32] ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 00:30:54.227 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdf24e18151 ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 00:30:54.237 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 
00:30:54.238 [INFO][4427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0", GenerateName:"calico-kube-controllers-7d4dbbbd84-", Namespace:"calico-system", SelfLink:"", UID:"6177d0af-c7ec-41af-a5e7-d14d37e79e3f", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dbbbd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9", Pod:"calico-kube-controllers-7d4dbbbd84-pgmv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf24e18151", MAC:"8a:ca:c1:2d:e3:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:54.256627 containerd[1458]: 2026-01-24 00:30:54.253 [INFO][4427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9" Namespace="calico-system" Pod="calico-kube-controllers-7d4dbbbd84-pgmv7" WorkloadEndpoint="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:30:54.260973 containerd[1458]: time="2026-01-24T00:30:54.259307929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-484vf,Uid:974bf216-052b-49fa-b0ab-b6a46ee1fdcb,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a\"" Jan 24 00:30:54.263742 containerd[1458]: time="2026-01-24T00:30:54.263714604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:30:54.283012 containerd[1458]: time="2026-01-24T00:30:54.282925868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:54.283113 containerd[1458]: time="2026-01-24T00:30:54.283055778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:54.283186 containerd[1458]: time="2026-01-24T00:30:54.283141948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:54.283429 containerd[1458]: time="2026-01-24T00:30:54.283340858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:54.307172 systemd[1]: Started cri-containerd-05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9.scope - libcontainer container 05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9. Jan 24 00:30:54.349183 containerd[1458]: time="2026-01-24T00:30:54.349070049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4dbbbd84-pgmv7,Uid:6177d0af-c7ec-41af-a5e7-d14d37e79e3f,Namespace:calico-system,Attempt:1,} returns sandbox id \"05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9\"" Jan 24 00:30:54.390820 containerd[1458]: time="2026-01-24T00:30:54.390102630Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:54.391215 containerd[1458]: time="2026-01-24T00:30:54.391189351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:30:54.391379 containerd[1458]: time="2026-01-24T00:30:54.391272681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:30:54.391888 kubelet[2533]: E0124 00:30:54.391504 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:54.391888 kubelet[2533]: E0124 00:30:54.391539 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:30:54.391888 kubelet[2533]: E0124 00:30:54.391658 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:54.392162 containerd[1458]: time="2026-01-24T00:30:54.392128782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:30:54.519624 containerd[1458]: time="2026-01-24T00:30:54.519558700Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:54.520363 containerd[1458]: time="2026-01-24T00:30:54.520313130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found" Jan 24 00:30:54.520475 containerd[1458]: time="2026-01-24T00:30:54.520326130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:30:54.520564 kubelet[2533]: E0124 00:30:54.520533 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:54.520616 kubelet[2533]: E0124 00:30:54.520572 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:30:54.520746 kubelet[2533]: E0124 00:30:54.520721 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:54.520854 kubelet[2533]: E0124 00:30:54.520761 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:30:54.522149 containerd[1458]: time="2026-01-24T00:30:54.522123583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:30:54.660136 containerd[1458]: time="2026-01-24T00:30:54.659964533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:54.661041 containerd[1458]: time="2026-01-24T00:30:54.660985554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:30:54.661152 containerd[1458]: time="2026-01-24T00:30:54.661104464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:30:54.661224 kubelet[2533]: E0124 00:30:54.661188 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:54.661267 kubelet[2533]: E0124 00:30:54.661229 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:30:54.661310 kubelet[2533]: E0124 00:30:54.661288 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:54.661374 kubelet[2533]: E0124 00:30:54.661331 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:54.817071 containerd[1458]: time="2026-01-24T00:30:54.814923574Z" level=info msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" Jan 24 00:30:54.817071 containerd[1458]: time="2026-01-24T00:30:54.815852615Z" level=info msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.867 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.869 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" iface="eth0" netns="/var/run/netns/cni-1702845c-1f8d-d092-39c5-3a86e02e9a65" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.870 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" iface="eth0" netns="/var/run/netns/cni-1702845c-1f8d-d092-39c5-3a86e02e9a65" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.871 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" iface="eth0" netns="/var/run/netns/cni-1702845c-1f8d-d092-39c5-3a86e02e9a65" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.871 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.871 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.908 [INFO][4596] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.908 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.908 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.914 [WARNING][4596] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.914 [INFO][4596] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.915 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:54.918892 containerd[1458]: 2026-01-24 00:30:54.917 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:30:54.920605 containerd[1458]: time="2026-01-24T00:30:54.920482074Z" level=info msg="TearDown network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" successfully" Jan 24 00:30:54.920605 containerd[1458]: time="2026-01-24T00:30:54.920512414Z" level=info msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" returns successfully" Jan 24 00:30:54.922966 containerd[1458]: time="2026-01-24T00:30:54.922936427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-2j9nd,Uid:7a9ec77a-a441-4797-ab37-24de3d316a35,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.869 [INFO][4585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.869 [INFO][4585] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" iface="eth0" netns="/var/run/netns/cni-131e044a-5a92-f32e-a341-c2da80419e0a" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.870 [INFO][4585] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" iface="eth0" netns="/var/run/netns/cni-131e044a-5a92-f32e-a341-c2da80419e0a" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.870 [INFO][4585] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" iface="eth0" netns="/var/run/netns/cni-131e044a-5a92-f32e-a341-c2da80419e0a" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.870 [INFO][4585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.870 [INFO][4585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.906 [INFO][4594] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.909 [INFO][4594] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.915 [INFO][4594] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.921 [WARNING][4594] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.921 [INFO][4594] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.923 [INFO][4594] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:30:54.930454 containerd[1458]: 2026-01-24 00:30:54.927 [INFO][4585] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:30:54.930454 containerd[1458]: time="2026-01-24T00:30:54.929637065Z" level=info msg="TearDown network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" successfully" Jan 24 00:30:54.930454 containerd[1458]: time="2026-01-24T00:30:54.929656355Z" level=info msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" returns successfully" Jan 24 00:30:54.931458 containerd[1458]: time="2026-01-24T00:30:54.931269617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pq45k,Uid:abb28b3a-6878-432c-ab4c-0e09969f7334,Namespace:calico-system,Attempt:1,}" Jan 24 00:30:54.972392 systemd[1]: run-netns-cni\x2d1702845c\x2d1f8d\x2dd092\x2d39c5\x2d3a86e02e9a65.mount: Deactivated successfully. Jan 24 00:30:54.972495 systemd[1]: run-netns-cni\x2d131e044a\x2d5a92\x2df32e\x2da341\x2dc2da80419e0a.mount: Deactivated successfully. Jan 24 00:30:55.047748 kubelet[2533]: E0124 00:30:55.045732 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:30:55.059447 kubelet[2533]: E0124 00:30:55.058065 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:55.062116 systemd-networkd[1381]: califb0bb91ddbf: Link UP Jan 24 00:30:55.064365 systemd-networkd[1381]: califb0bb91ddbf: Gained carrier Jan 24 00:30:55.071073 kubelet[2533]: E0124 00:30:55.070988 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:54.965 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0 calico-apiserver-94fb7866c- calico-apiserver 7a9ec77a-a441-4797-ab37-24de3d316a35 1020 0 2026-01-24 00:30:29 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94fb7866c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-200-204 calico-apiserver-94fb7866c-2j9nd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb0bb91ddbf [] [] }} ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:54.965 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.007 [INFO][4633] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" HandleID="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.007 [INFO][4633] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" HandleID="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-234-200-204", "pod":"calico-apiserver-94fb7866c-2j9nd", "timestamp":"2026-01-24 00:30:55.007564011 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.007 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.008 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.008 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.014 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.018 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.023 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.025 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.027 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.027 [INFO][4633] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.029 [INFO][4633] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209 Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.033 [INFO][4633] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.043 [INFO][4633] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.199/26] block=192.168.69.192/26 handle="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.043 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.199/26] handle="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" host="172-234-200-204" Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.043 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
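[editor's note] Every assignment in this log is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock", serializing the concurrent CNI ADDs (the apiserver and goldmane requests interleave just below). A hedged sketch of that pattern as a mutex-guarded allocator, not Calico's actual locking code; the cursor arithmetic is purely illustrative:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes allocations the way the host-wide IPAM lock in
    // the log does: only one CNI ADD may touch block state at a time.
    type hostIPAM struct {
        mu   sync.Mutex
        next int // illustrative cursor into 192.168.69.192/26
    }

    func (h *hostIPAM) assign(pod string) string {
        h.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.69.%d/26", 199+h.next)
        h.next++
        return ip
    }

    func main() {
        ipam := &hostIPAM{}
        var wg sync.WaitGroup
        for _, pod := range []string{"calico-apiserver-94fb7866c-2j9nd", "goldmane-7c778bb748-pq45k"} {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                fmt.Println(p, "->", ipam.assign(p))
            }(pod)
        }
        wg.Wait()
    }

[end note]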
Jan 24 00:30:55.097289 containerd[1458]: 2026-01-24 00:30:55.043 [INFO][4633] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.199/26] IPv6=[] ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" HandleID="k8s-pod-network.868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.053 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a9ec77a-a441-4797-ab37-24de3d316a35", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"calico-apiserver-94fb7866c-2j9nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0bb91ddbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.054 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.199/32] ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.054 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb0bb91ddbf ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.065 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.065 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a9ec77a-a441-4797-ab37-24de3d316a35", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209", Pod:"calico-apiserver-94fb7866c-2j9nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0bb91ddbf", MAC:"da:d1:46:97:1d:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:55.098106 containerd[1458]: 2026-01-24 00:30:55.094 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209" Namespace="calico-apiserver" Pod="calico-apiserver-94fb7866c-2j9nd" WorkloadEndpoint="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:30:55.123519 containerd[1458]: time="2026-01-24T00:30:55.123393595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:55.124033 containerd[1458]: time="2026-01-24T00:30:55.123806815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:55.124033 containerd[1458]: time="2026-01-24T00:30:55.123836735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:55.124033 containerd[1458]: time="2026-01-24T00:30:55.123920845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:55.160412 systemd-networkd[1381]: cali382fc6e3943: Link UP Jan 24 00:30:55.160646 systemd-networkd[1381]: cali382fc6e3943: Gained carrier Jan 24 00:30:55.168158 systemd[1]: Started cri-containerd-868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209.scope - libcontainer container 868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209. 
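[editor's note] The csi and kube-controllers pulls above already failed with ErrImagePull, and the apiserver pull below fails the same way before kubelet moves the pods to ImagePullBackOff. A sketch of the usual capped exponential backoff behind "Back-off pulling image ..."; the 10s initial delay and 5m cap match kubelet's commonly cited image-pull defaults but should be treated as assumptions here:

    package main

    import (
        "fmt"
        "time"
    )

    // pullBackoff doubles the retry delay after each failed pull, up to
    // a cap, which is the behaviour behind ImagePullBackOff.
    func pullBackoff(attempt int, initial, max time.Duration) time.Duration {
        d := initial << attempt // initial * 2^attempt
        if d > max || d <= 0 {
            return max
        }
        return d
    }

    func main() {
        for i := 0; i < 7; i++ {
            fmt.Printf("attempt %d: wait %v\n", i, pullBackoff(i, 10*time.Second, 5*time.Minute))
        }
        // attempt 0: 10s, attempt 1: 20s, ... attempt 5 onward: capped at 5m0s
    }

[end note]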
Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:54.981 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0 goldmane-7c778bb748- calico-system abb28b3a-6878-432c-ab4c-0e09969f7334 1021 0 2026-01-24 00:30:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-200-204 goldmane-7c778bb748-pq45k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali382fc6e3943 [] [] }} ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:54.982 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.073 [INFO][4639] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" HandleID="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.076 [INFO][4639] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" HandleID="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf770), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-200-204", "pod":"goldmane-7c778bb748-pq45k", "timestamp":"2026-01-24 00:30:55.073350917 +0000 UTC"}, Hostname:"172-234-200-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.077 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.077 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.077 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-200-204' Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.115 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.120 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.126 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.129 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.132 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.132 [INFO][4639] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.134 [INFO][4639] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5 Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.141 [INFO][4639] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.153 [INFO][4639] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.200/26] block=192.168.69.192/26 handle="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.154 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.200/26] handle="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" host="172-234-200-204" Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.154 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
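[editor's note] Each claim above is written under a handle named after the container ("Creating new handle: k8s-pod-network.<containerID>"), and the teardown entries earlier release by that handle, ignoring handles that no longer exist ("Asked to release address but it doesn't exist. Ignoring"). A minimal sketch of such an idempotent handle map, with names chosen for illustration rather than taken from Calico's implementation:

    package main

    import "fmt"

    // handleStore maps IPAM handles (k8s-pod-network.<containerID>) to
    // the IPs claimed under them, so teardown can release by handle.
    type handleStore struct {
        byHandle map[string][]string
    }

    func (s *handleStore) claim(handle, ip string) {
        s.byHandle[handle] = append(s.byHandle[handle], ip)
    }

    // release is idempotent: a repeated delete for the same sandbox just
    // warns and moves on, like the WARNING in the teardown above.
    func (s *handleStore) release(handle string) {
        if _, ok := s.byHandle[handle]; !ok {
            fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring:", handle)
            return
        }
        delete(s.byHandle, handle)
    }

    func main() {
        s := &handleStore{byHandle: map[string][]string{}}
        h := "k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5"
        s.claim(h, "192.168.69.200")
        s.release(h)
        s.release(h) // second CNI DEL: warning only, no error
    }

[end note]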
Jan 24 00:30:55.198110 containerd[1458]: 2026-01-24 00:30:55.154 [INFO][4639] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.200/26] IPv6=[] ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" HandleID="k8s-pod-network.d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.156 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"abb28b3a-6878-432c-ab4c-0e09969f7334", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"", Pod:"goldmane-7c778bb748-pq45k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali382fc6e3943", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.157 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.200/32] ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.157 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali382fc6e3943 ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.164 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.171 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" 
WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"abb28b3a-6878-432c-ab4c-0e09969f7334", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5", Pod:"goldmane-7c778bb748-pq45k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali382fc6e3943", MAC:"2e:c2:da:f5:86:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:30:55.199327 containerd[1458]: 2026-01-24 00:30:55.183 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5" Namespace="calico-system" Pod="goldmane-7c778bb748-pq45k" WorkloadEndpoint="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:30:55.226388 containerd[1458]: time="2026-01-24T00:30:55.226199763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:30:55.226388 containerd[1458]: time="2026-01-24T00:30:55.226243463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:30:55.226388 containerd[1458]: time="2026-01-24T00:30:55.226256393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:55.226388 containerd[1458]: time="2026-01-24T00:30:55.226329633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:30:55.250948 containerd[1458]: time="2026-01-24T00:30:55.249920481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94fb7866c-2j9nd,Uid:7a9ec77a-a441-4797-ab37-24de3d316a35,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209\"" Jan 24 00:30:55.255286 containerd[1458]: time="2026-01-24T00:30:55.255237107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:30:55.257364 systemd[1]: Started cri-containerd-d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5.scope - libcontainer container d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5. 
Jan 24 00:30:55.306594 containerd[1458]: time="2026-01-24T00:30:55.306517416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pq45k,Uid:abb28b3a-6878-432c-ab4c-0e09969f7334,Namespace:calico-system,Attempt:1,} returns sandbox id \"d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5\"" Jan 24 00:30:55.379829 containerd[1458]: time="2026-01-24T00:30:55.379677071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:55.380638 containerd[1458]: time="2026-01-24T00:30:55.380534722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:30:55.380638 containerd[1458]: time="2026-01-24T00:30:55.380600612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:55.382459 kubelet[2533]: E0124 00:30:55.380815 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:55.382459 kubelet[2533]: E0124 00:30:55.380852 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:30:55.382459 kubelet[2533]: E0124 00:30:55.380985 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:55.382459 kubelet[2533]: E0124 00:30:55.381032 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:30:55.382678 containerd[1458]: time="2026-01-24T00:30:55.382305104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:30:55.447843 kubelet[2533]: I0124 00:30:55.446878 2533 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:30:55.447843 kubelet[2533]: E0124 00:30:55.447266 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:55.525376 
containerd[1458]: time="2026-01-24T00:30:55.525239349Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:30:55.528654 containerd[1458]: time="2026-01-24T00:30:55.528126392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:30:55.528654 containerd[1458]: time="2026-01-24T00:30:55.528198392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:30:55.528764 kubelet[2533]: E0124 00:30:55.528541 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:55.528764 kubelet[2533]: E0124 00:30:55.528568 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:30:55.528764 kubelet[2533]: E0124 00:30:55.528626 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:30:55.528764 kubelet[2533]: E0124 00:30:55.528653 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:30:55.764181 systemd-networkd[1381]: calib9b32692612: Gained IPv6LL Jan 24 00:30:55.828215 systemd-networkd[1381]: calicdf24e18151: Gained IPv6LL Jan 24 00:30:56.063858 kubelet[2533]: E0124 00:30:56.062490 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:30:56.068396 kubelet[2533]: E0124 00:30:56.067661 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:30:56.070213 kubelet[2533]: E0124 00:30:56.070166 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:30:56.070375 kubelet[2533]: E0124 00:30:56.070269 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:30:56.070810 kubelet[2533]: E0124 00:30:56.070760 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:30:56.405242 systemd-networkd[1381]: califb0bb91ddbf: Gained IPv6LL Jan 24 00:30:57.069381 kubelet[2533]: E0124 00:30:57.069335 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:30:57.069828 kubelet[2533]: E0124 00:30:57.069685 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:30:57.108429 systemd-networkd[1381]: cali382fc6e3943: Gained IPv6LL Jan 24 00:31:01.814471 containerd[1458]: time="2026-01-24T00:31:01.814424668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:31:01.951471 containerd[1458]: time="2026-01-24T00:31:01.951427315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:01.952279 containerd[1458]: time="2026-01-24T00:31:01.952244386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:31:01.952378 containerd[1458]: time="2026-01-24T00:31:01.952329386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:31:01.952747 kubelet[2533]: E0124 00:31:01.952477 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:01.952747 kubelet[2533]: E0124 00:31:01.952532 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:01.952747 kubelet[2533]: E0124 00:31:01.952617 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:01.954529 containerd[1458]: time="2026-01-24T00:31:01.954504318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:31:02.084312 containerd[1458]: time="2026-01-24T00:31:02.084171105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:02.085178 containerd[1458]: time="2026-01-24T00:31:02.085080696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:31:02.085266 containerd[1458]: time="2026-01-24T00:31:02.085104956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:02.085735 kubelet[2533]: E0124 00:31:02.085688 2533 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:02.085785 kubelet[2533]: E0124 00:31:02.085736 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:02.085840 kubelet[2533]: E0124 00:31:02.085809 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:02.085947 kubelet[2533]: E0124 00:31:02.085873 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:31:07.816283 containerd[1458]: time="2026-01-24T00:31:07.815828314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:31:08.105614 containerd[1458]: time="2026-01-24T00:31:08.105083624Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:08.106351 containerd[1458]: time="2026-01-24T00:31:08.106192325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:31:08.106351 containerd[1458]: time="2026-01-24T00:31:08.106276755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:08.106802 kubelet[2533]: E0124 00:31:08.106746 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:08.107445 kubelet[2533]: E0124 00:31:08.106976 2533 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:08.108618 kubelet[2533]: E0124 00:31:08.107862 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:08.108618 kubelet[2533]: E0124 00:31:08.107906 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:31:08.109344 containerd[1458]: time="2026-01-24T00:31:08.108922236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:08.243052 containerd[1458]: time="2026-01-24T00:31:08.241835482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:08.243052 containerd[1458]: time="2026-01-24T00:31:08.242676353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:08.243052 containerd[1458]: time="2026-01-24T00:31:08.242796543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:08.243330 kubelet[2533]: E0124 00:31:08.243164 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:08.243330 kubelet[2533]: E0124 00:31:08.243247 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:08.243419 kubelet[2533]: E0124 00:31:08.243393 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:31:08.243448 kubelet[2533]: E0124 00:31:08.243428 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:31:08.816366 containerd[1458]: time="2026-01-24T00:31:08.816150220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:31:08.950440 containerd[1458]: time="2026-01-24T00:31:08.950375317Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:08.951279 containerd[1458]: time="2026-01-24T00:31:08.951183587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:31:08.951279 containerd[1458]: time="2026-01-24T00:31:08.951240087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:08.951482 kubelet[2533]: E0124 00:31:08.951424 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:08.951545 kubelet[2533]: E0124 00:31:08.951494 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:08.951599 kubelet[2533]: E0124 00:31:08.951580 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:08.951686 kubelet[2533]: E0124 00:31:08.951621 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:31:10.820840 
containerd[1458]: time="2026-01-24T00:31:10.820612930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:31:10.952040 containerd[1458]: time="2026-01-24T00:31:10.951972698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:10.952940 containerd[1458]: time="2026-01-24T00:31:10.952908478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:31:10.953091 containerd[1458]: time="2026-01-24T00:31:10.952981158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:31:10.953766 kubelet[2533]: E0124 00:31:10.953234 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:10.953766 kubelet[2533]: E0124 00:31:10.953280 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:10.953766 kubelet[2533]: E0124 00:31:10.953349 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:10.956656 containerd[1458]: time="2026-01-24T00:31:10.956135729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:31:11.089391 containerd[1458]: time="2026-01-24T00:31:11.089247525Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:11.090226 containerd[1458]: time="2026-01-24T00:31:11.090190346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:31:11.090321 containerd[1458]: time="2026-01-24T00:31:11.090263286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:31:11.090537 kubelet[2533]: E0124 00:31:11.090497 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 
00:31:11.090609 kubelet[2533]: E0124 00:31:11.090546 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:11.090652 kubelet[2533]: E0124 00:31:11.090621 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:11.090733 kubelet[2533]: E0124 00:31:11.090687 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:31:11.815876 containerd[1458]: time="2026-01-24T00:31:11.815821464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:11.943653 containerd[1458]: time="2026-01-24T00:31:11.943603647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:11.944764 containerd[1458]: time="2026-01-24T00:31:11.944719197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:11.944839 containerd[1458]: time="2026-01-24T00:31:11.944803797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:11.945026 kubelet[2533]: E0124 00:31:11.944970 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:11.945109 kubelet[2533]: E0124 00:31:11.945094 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:11.945270 kubelet[2533]: E0124 00:31:11.945165 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:11.945270 kubelet[2533]: E0124 00:31:11.945195 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:31:14.797622 containerd[1458]: time="2026-01-24T00:31:14.797302809Z" level=info msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.846 [WARNING][4821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-csi--node--driver--484vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974bf216-052b-49fa-b0ab-b6a46ee1fdcb", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a", Pod:"csi-node-driver-484vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9b32692612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.847 [INFO][4821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.847 [INFO][4821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" iface="eth0" netns="" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.847 [INFO][4821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.847 [INFO][4821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.883 [INFO][4831] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.883 [INFO][4831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.883 [INFO][4831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.890 [WARNING][4831] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.890 [INFO][4831] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.891 [INFO][4831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:14.898027 containerd[1458]: 2026-01-24 00:31:14.893 [INFO][4821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:14.898027 containerd[1458]: time="2026-01-24T00:31:14.897188473Z" level=info msg="TearDown network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" successfully" Jan 24 00:31:14.898027 containerd[1458]: time="2026-01-24T00:31:14.897207953Z" level=info msg="StopPodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" returns successfully" Jan 24 00:31:14.898773 containerd[1458]: time="2026-01-24T00:31:14.898746554Z" level=info msg="RemovePodSandbox for \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" Jan 24 00:31:14.898849 containerd[1458]: time="2026-01-24T00:31:14.898832164Z" level=info msg="Forcibly stopping sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\"" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.962 [WARNING][4845] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-csi--node--driver--484vf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"974bf216-052b-49fa-b0ab-b6a46ee1fdcb", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"2e933745a057c1cb3f17acdcc59c4efba2818fb07769cfb6f996f38478bc1d1a", Pod:"csi-node-driver-484vf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9b32692612", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.964 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.964 [INFO][4845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" iface="eth0" netns="" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.964 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.964 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.996 [INFO][4853] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.999 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:14.999 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:15.009 [WARNING][4853] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:15.009 [INFO][4853] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" HandleID="k8s-pod-network.3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Workload="172--234--200--204-k8s-csi--node--driver--484vf-eth0" Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:15.011 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.019825 containerd[1458]: 2026-01-24 00:31:15.015 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1" Jan 24 00:31:15.020246 containerd[1458]: time="2026-01-24T00:31:15.019869734Z" level=info msg="TearDown network for sandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" successfully" Jan 24 00:31:15.023887 containerd[1458]: time="2026-01-24T00:31:15.023845546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.023966 containerd[1458]: time="2026-01-24T00:31:15.023919896Z" level=info msg="RemovePodSandbox \"3a99ab3fd9d7031be42249572a24ceed107174e8ffc3812dac6e918e287b40e1\" returns successfully" Jan 24 00:31:15.025482 containerd[1458]: time="2026-01-24T00:31:15.025451436Z" level=info msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.072 [WARNING][4867] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a484550e-d179-4ca3-a2ad-d4ef7f1868f9", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec", Pod:"calico-apiserver-94fb7866c-6mcp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52247adce68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.073 [INFO][4867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.073 [INFO][4867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" iface="eth0" netns="" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.073 [INFO][4867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.073 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.117 [INFO][4874] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.117 [INFO][4874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.118 [INFO][4874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.124 [WARNING][4874] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.124 [INFO][4874] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.126 [INFO][4874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.130511 containerd[1458]: 2026-01-24 00:31:15.128 [INFO][4867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.131338 containerd[1458]: time="2026-01-24T00:31:15.130557029Z" level=info msg="TearDown network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" successfully" Jan 24 00:31:15.131338 containerd[1458]: time="2026-01-24T00:31:15.130576659Z" level=info msg="StopPodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" returns successfully" Jan 24 00:31:15.131338 containerd[1458]: time="2026-01-24T00:31:15.130827920Z" level=info msg="RemovePodSandbox for \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" Jan 24 00:31:15.131338 containerd[1458]: time="2026-01-24T00:31:15.130848790Z" level=info msg="Forcibly stopping sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\"" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.167 [WARNING][4888] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a484550e-d179-4ca3-a2ad-d4ef7f1868f9", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"72648b4cc6ace204a8bfb6f53c0bfe98bef1451e9fd52ec4041d1d130a46cfec", Pod:"calico-apiserver-94fb7866c-6mcp2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52247adce68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.167 [INFO][4888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.168 [INFO][4888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" iface="eth0" netns="" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.168 [INFO][4888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.168 [INFO][4888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.192 [INFO][4895] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.193 [INFO][4895] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.193 [INFO][4895] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.200 [WARNING][4895] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.200 [INFO][4895] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" HandleID="k8s-pod-network.0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--6mcp2-eth0" Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.201 [INFO][4895] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.207272 containerd[1458]: 2026-01-24 00:31:15.205 [INFO][4888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741" Jan 24 00:31:15.207659 containerd[1458]: time="2026-01-24T00:31:15.207327524Z" level=info msg="TearDown network for sandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" successfully" Jan 24 00:31:15.210161 containerd[1458]: time="2026-01-24T00:31:15.210136285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.210224 containerd[1458]: time="2026-01-24T00:31:15.210179645Z" level=info msg="RemovePodSandbox \"0ddc96158be3503ad6270b38086c75b90b1205e70161aed7d592091183972741\" returns successfully" Jan 24 00:31:15.210684 containerd[1458]: time="2026-01-24T00:31:15.210664265Z" level=info msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.248 [WARNING][4909] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"896b4aca-6a31-459c-a1e1-1b5e3edbde9c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38", Pod:"coredns-66bc5c9577-qg8z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c7879b000", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.248 [INFO][4909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.248 [INFO][4909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" iface="eth0" netns="" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.248 [INFO][4909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.248 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.272 [INFO][4917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.273 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.273 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.280 [WARNING][4917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.280 [INFO][4917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.281 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.286229 containerd[1458]: 2026-01-24 00:31:15.283 [INFO][4909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.286229 containerd[1458]: time="2026-01-24T00:31:15.286117139Z" level=info msg="TearDown network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" successfully" Jan 24 00:31:15.286229 containerd[1458]: time="2026-01-24T00:31:15.286141219Z" level=info msg="StopPodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" returns successfully" Jan 24 00:31:15.287662 containerd[1458]: time="2026-01-24T00:31:15.287235089Z" level=info msg="RemovePodSandbox for \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" Jan 24 00:31:15.287662 containerd[1458]: time="2026-01-24T00:31:15.287263009Z" level=info msg="Forcibly stopping sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\"" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.329 [WARNING][4933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"896b4aca-6a31-459c-a1e1-1b5e3edbde9c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"6d7fb6537715443d62ae53e8b688374f7f724b935f3916d46a1d72d497381b38", Pod:"coredns-66bc5c9577-qg8z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61c7879b000", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.329 [INFO][4933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.329 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" iface="eth0" netns="" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.329 [INFO][4933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.329 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.359 [INFO][4941] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.359 [INFO][4941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.360 [INFO][4941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.368 [WARNING][4941] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.368 [INFO][4941] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" HandleID="k8s-pod-network.6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Workload="172--234--200--204-k8s-coredns--66bc5c9577--qg8z8-eth0" Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.370 [INFO][4941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.374779 containerd[1458]: 2026-01-24 00:31:15.372 [INFO][4933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96" Jan 24 00:31:15.376107 containerd[1458]: time="2026-01-24T00:31:15.374954507Z" level=info msg="TearDown network for sandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" successfully" Jan 24 00:31:15.378381 containerd[1458]: time="2026-01-24T00:31:15.378350878Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.378507 containerd[1458]: time="2026-01-24T00:31:15.378490988Z" level=info msg="RemovePodSandbox \"6304ab832ad901a7745d7d237152e8fd3fef39a9d08bc3105415a3ddd5ca3a96\" returns successfully" Jan 24 00:31:15.379398 containerd[1458]: time="2026-01-24T00:31:15.379370899Z" level=info msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.414 [WARNING][4955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44c8e029-8edf-43c5-9553-a705dde6d475", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1", Pod:"coredns-66bc5c9577-k6lzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66575f2af4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.414 [INFO][4955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.414 [INFO][4955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" iface="eth0" netns="" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.414 [INFO][4955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.414 [INFO][4955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.445 [INFO][4962] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.445 [INFO][4962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.446 [INFO][4962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.450 [WARNING][4962] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.450 [INFO][4962] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.451 [INFO][4962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.457197 containerd[1458]: 2026-01-24 00:31:15.454 [INFO][4955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.458723 containerd[1458]: time="2026-01-24T00:31:15.457852333Z" level=info msg="TearDown network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" successfully" Jan 24 00:31:15.458723 containerd[1458]: time="2026-01-24T00:31:15.457878653Z" level=info msg="StopPodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" returns successfully" Jan 24 00:31:15.459474 containerd[1458]: time="2026-01-24T00:31:15.458986044Z" level=info msg="RemovePodSandbox for \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" Jan 24 00:31:15.459474 containerd[1458]: time="2026-01-24T00:31:15.459313214Z" level=info msg="Forcibly stopping sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\"" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.494 [WARNING][4976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"44c8e029-8edf-43c5-9553-a705dde6d475", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"b17fa8f86bd66286befce45f3a8fb3e243ac98b0767d32625a371eaf3ce982b1", Pod:"coredns-66bc5c9577-k6lzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66575f2af4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.494 [INFO][4976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.495 [INFO][4976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" iface="eth0" netns="" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.495 [INFO][4976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.495 [INFO][4976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.517 [INFO][4984] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.518 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.518 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.525 [WARNING][4984] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.525 [INFO][4984] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" HandleID="k8s-pod-network.a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Workload="172--234--200--204-k8s-coredns--66bc5c9577--k6lzq-eth0" Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.526 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.530889 containerd[1458]: 2026-01-24 00:31:15.528 [INFO][4976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc" Jan 24 00:31:15.533122 containerd[1458]: time="2026-01-24T00:31:15.531090717Z" level=info msg="TearDown network for sandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" successfully" Jan 24 00:31:15.533981 containerd[1458]: time="2026-01-24T00:31:15.533941548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.534067 containerd[1458]: time="2026-01-24T00:31:15.534023378Z" level=info msg="RemovePodSandbox \"a2d01a922c73526e5044e30d0d57e5b22c481c2092c6fb019a140929049bc4dc\" returns successfully" Jan 24 00:31:15.536051 containerd[1458]: time="2026-01-24T00:31:15.534987718Z" level=info msg="StopPodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.570 [WARNING][4998] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0", GenerateName:"calico-kube-controllers-7d4dbbbd84-", Namespace:"calico-system", SelfLink:"", UID:"6177d0af-c7ec-41af-a5e7-d14d37e79e3f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dbbbd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9", Pod:"calico-kube-controllers-7d4dbbbd84-pgmv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf24e18151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.570 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.570 [INFO][4998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" iface="eth0" netns="" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.570 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.570 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.606 [INFO][5005] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.606 [INFO][5005] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.607 [INFO][5005] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.614 [WARNING][5005] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.614 [INFO][5005] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.618 [INFO][5005] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.622514 containerd[1458]: 2026-01-24 00:31:15.620 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.623307 containerd[1458]: time="2026-01-24T00:31:15.622967826Z" level=info msg="TearDown network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" successfully" Jan 24 00:31:15.623307 containerd[1458]: time="2026-01-24T00:31:15.623026836Z" level=info msg="StopPodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" returns successfully" Jan 24 00:31:15.623874 containerd[1458]: time="2026-01-24T00:31:15.623758306Z" level=info msg="RemovePodSandbox for \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" Jan 24 00:31:15.623874 containerd[1458]: time="2026-01-24T00:31:15.623800626Z" level=info msg="Forcibly stopping sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\"" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.660 [WARNING][5019] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0", GenerateName:"calico-kube-controllers-7d4dbbbd84-", Namespace:"calico-system", SelfLink:"", UID:"6177d0af-c7ec-41af-a5e7-d14d37e79e3f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4dbbbd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"05426a46f9abf38932c7991bd5dc1ace0554b8d3a3638eae481fa3544b287ba9", Pod:"calico-kube-controllers-7d4dbbbd84-pgmv7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicdf24e18151", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.660 [INFO][5019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.660 [INFO][5019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" iface="eth0" netns="" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.660 [INFO][5019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.660 [INFO][5019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.690 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.690 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.690 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.695 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.695 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" HandleID="k8s-pod-network.2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Workload="172--234--200--204-k8s-calico--kube--controllers--7d4dbbbd84--pgmv7-eth0" Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.697 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.701102 containerd[1458]: 2026-01-24 00:31:15.699 [INFO][5019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936" Jan 24 00:31:15.701591 containerd[1458]: time="2026-01-24T00:31:15.701139181Z" level=info msg="TearDown network for sandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" successfully" Jan 24 00:31:15.704054 containerd[1458]: time="2026-01-24T00:31:15.704024652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.704134 containerd[1458]: time="2026-01-24T00:31:15.704069102Z" level=info msg="RemovePodSandbox \"2d1cab1c2c68182ef15fd3ef9a3e1d7c5cc7b90b46ce59f4405a640fe16a4936\" returns successfully" Jan 24 00:31:15.704592 containerd[1458]: time="2026-01-24T00:31:15.704571322Z" level=info msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.740 [WARNING][5041] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" WorkloadEndpoint="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.741 [INFO][5041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.741 [INFO][5041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" iface="eth0" netns="" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.741 [INFO][5041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.741 [INFO][5041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.763 [INFO][5048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.763 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.763 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.769 [WARNING][5048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.769 [INFO][5048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.771 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.777915 containerd[1458]: 2026-01-24 00:31:15.773 [INFO][5041] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.777915 containerd[1458]: time="2026-01-24T00:31:15.777739555Z" level=info msg="TearDown network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" successfully" Jan 24 00:31:15.777915 containerd[1458]: time="2026-01-24T00:31:15.777764245Z" level=info msg="StopPodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" returns successfully" Jan 24 00:31:15.780801 containerd[1458]: time="2026-01-24T00:31:15.780163296Z" level=info msg="RemovePodSandbox for \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" Jan 24 00:31:15.780801 containerd[1458]: time="2026-01-24T00:31:15.780219176Z" level=info msg="Forcibly stopping sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\"" Jan 24 00:31:15.820456 kubelet[2533]: E0124 00:31:15.820402 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.810 [WARNING][5062] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" WorkloadEndpoint="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.811 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.811 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" iface="eth0" netns="" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.811 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.811 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.836 [INFO][5069] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.837 [INFO][5069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.837 [INFO][5069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.844 [WARNING][5069] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.844 [INFO][5069] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" HandleID="k8s-pod-network.df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Workload="172--234--200--204-k8s-whisker--59f77f478b--khbcv-eth0" Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.846 [INFO][5069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.851114 containerd[1458]: 2026-01-24 00:31:15.848 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e" Jan 24 00:31:15.851114 containerd[1458]: time="2026-01-24T00:31:15.850282438Z" level=info msg="TearDown network for sandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" successfully" Jan 24 00:31:15.854139 containerd[1458]: time="2026-01-24T00:31:15.853978049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.854139 containerd[1458]: time="2026-01-24T00:31:15.854047959Z" level=info msg="RemovePodSandbox \"df1a31d155bda580be39d07c8a8d2b6d2c342a27ecd7286995245df556befe5e\" returns successfully" Jan 24 00:31:15.854951 containerd[1458]: time="2026-01-24T00:31:15.854925970Z" level=info msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.893 [WARNING][5083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a9ec77a-a441-4797-ab37-24de3d316a35", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209", Pod:"calico-apiserver-94fb7866c-2j9nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0bb91ddbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.893 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.893 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" iface="eth0" netns="" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.893 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.893 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.915 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.915 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.915 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.921 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.921 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.923 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.928132 containerd[1458]: 2026-01-24 00:31:15.925 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.928650 containerd[1458]: time="2026-01-24T00:31:15.928256433Z" level=info msg="TearDown network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" successfully" Jan 24 00:31:15.929296 containerd[1458]: time="2026-01-24T00:31:15.928692093Z" level=info msg="StopPodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" returns successfully" Jan 24 00:31:15.929296 containerd[1458]: time="2026-01-24T00:31:15.929278953Z" level=info msg="RemovePodSandbox for \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" Jan 24 00:31:15.929397 containerd[1458]: time="2026-01-24T00:31:15.929304623Z" level=info msg="Forcibly stopping sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\"" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.960 [WARNING][5105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0", GenerateName:"calico-apiserver-94fb7866c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a9ec77a-a441-4797-ab37-24de3d316a35", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94fb7866c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"868ccf07e0dafbd8d3cefaba4688d3fe9fc50f068560acbaed39d1f77eeae209", Pod:"calico-apiserver-94fb7866c-2j9nd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb0bb91ddbf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.960 [INFO][5105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.960 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" iface="eth0" netns="" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.960 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.960 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.983 [INFO][5112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.983 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.983 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.988 [WARNING][5112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.988 [INFO][5112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" HandleID="k8s-pod-network.9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Workload="172--234--200--204-k8s-calico--apiserver--94fb7866c--2j9nd-eth0" Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.989 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:15.993554 containerd[1458]: 2026-01-24 00:31:15.991 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965" Jan 24 00:31:15.993989 containerd[1458]: time="2026-01-24T00:31:15.993585584Z" level=info msg="TearDown network for sandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" successfully" Jan 24 00:31:15.996335 containerd[1458]: time="2026-01-24T00:31:15.996311665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:15.996426 containerd[1458]: time="2026-01-24T00:31:15.996353485Z" level=info msg="RemovePodSandbox \"9dc8159fa5f5c39407b3e49e97ed9ec52ad064e5bec1b5a9568ff04c83cd8965\" returns successfully" Jan 24 00:31:15.996790 containerd[1458]: time="2026-01-24T00:31:15.996769025Z" level=info msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.030 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"abb28b3a-6878-432c-ab4c-0e09969f7334", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5", Pod:"goldmane-7c778bb748-pq45k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali382fc6e3943", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.030 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.030 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" iface="eth0" netns="" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.030 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.030 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.053 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.054 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.054 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.060 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.060 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.062 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:16.066986 containerd[1458]: 2026-01-24 00:31:16.064 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.067489 containerd[1458]: time="2026-01-24T00:31:16.067446586Z" level=info msg="TearDown network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" successfully" Jan 24 00:31:16.067489 containerd[1458]: time="2026-01-24T00:31:16.067475256Z" level=info msg="StopPodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" returns successfully" Jan 24 00:31:16.067954 containerd[1458]: time="2026-01-24T00:31:16.067932226Z" level=info msg="RemovePodSandbox for \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" Jan 24 00:31:16.068060 containerd[1458]: time="2026-01-24T00:31:16.067959606Z" level=info msg="Forcibly stopping sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\"" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.103 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"abb28b3a-6878-432c-ab4c-0e09969f7334", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 30, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-200-204", ContainerID:"d577b7e6a3001bf3a0888910b9d4c57ae0ae7453a97c41a749d1ac01262364d5", Pod:"goldmane-7c778bb748-pq45k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali382fc6e3943", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.103 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.103 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" iface="eth0" netns="" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.103 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.103 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.134 [INFO][5155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.135 [INFO][5155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.135 [INFO][5155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.139 [WARNING][5155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.139 [INFO][5155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" HandleID="k8s-pod-network.5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Workload="172--234--200--204-k8s-goldmane--7c778bb748--pq45k-eth0" Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.141 [INFO][5155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:31:16.145968 containerd[1458]: 2026-01-24 00:31:16.143 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254" Jan 24 00:31:16.146349 containerd[1458]: time="2026-01-24T00:31:16.146023649Z" level=info msg="TearDown network for sandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" successfully" Jan 24 00:31:16.152405 containerd[1458]: time="2026-01-24T00:31:16.152137891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:31:16.152405 containerd[1458]: time="2026-01-24T00:31:16.152179281Z" level=info msg="RemovePodSandbox \"5d2d475c17a039ad8d33419995920de63ca89c1802fa5d67f8467a1cdbe41254\" returns successfully" Jan 24 00:31:18.815686 kubelet[2533]: E0124 00:31:18.815357 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:31:21.814752 kubelet[2533]: E0124 00:31:21.814613 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:31:22.817456 kubelet[2533]: E0124 00:31:22.817411 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:31:23.815300 kubelet[2533]: E0124 00:31:23.815195 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:31:24.816341 kubelet[2533]: E0124 00:31:24.815826 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:31:26.814662 containerd[1458]: time="2026-01-24T00:31:26.814529307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:31:26.948068 containerd[1458]: time="2026-01-24T00:31:26.948020472Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:26.949142 containerd[1458]: time="2026-01-24T00:31:26.949031758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:31:26.949142 containerd[1458]: time="2026-01-24T00:31:26.949095009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:31:26.949275 kubelet[2533]: E0124 00:31:26.949239 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:26.950068 kubelet[2533]: E0124 00:31:26.949278 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:31:26.950068 kubelet[2533]: E0124 
00:31:26.949342 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:26.950399 containerd[1458]: time="2026-01-24T00:31:26.950299308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:31:27.089346 containerd[1458]: time="2026-01-24T00:31:27.088784405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:27.091135 containerd[1458]: time="2026-01-24T00:31:27.089666808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:31:27.091135 containerd[1458]: time="2026-01-24T00:31:27.089736549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:27.091308 kubelet[2533]: E0124 00:31:27.089921 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:27.091308 kubelet[2533]: E0124 00:31:27.089973 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:31:27.091308 kubelet[2533]: E0124 00:31:27.090063 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:27.091413 kubelet[2533]: E0124 00:31:27.090102 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:31:29.815590 containerd[1458]: time="2026-01-24T00:31:29.815077622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:31:29.949215 containerd[1458]: time="2026-01-24T00:31:29.949140903Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:29.950065 containerd[1458]: time="2026-01-24T00:31:29.950024206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:31:29.950165 containerd[1458]: time="2026-01-24T00:31:29.950100187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:29.950306 kubelet[2533]: E0124 00:31:29.950261 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:29.950670 kubelet[2533]: E0124 00:31:29.950299 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:31:29.950670 kubelet[2533]: E0124 00:31:29.950370 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:29.950670 kubelet[2533]: E0124 00:31:29.950398 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:31:31.814581 kubelet[2533]: E0124 00:31:31.814267 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:31:34.820815 containerd[1458]: time="2026-01-24T00:31:34.820362306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:31:34.965302 containerd[1458]: time="2026-01-24T00:31:34.965242351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:34.966132 containerd[1458]: time="2026-01-24T00:31:34.966102382Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:31:34.966229 containerd[1458]: time="2026-01-24T00:31:34.966165643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:31:34.966460 kubelet[2533]: E0124 00:31:34.966423 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:34.966811 kubelet[2533]: E0124 00:31:34.966467 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:31:34.966811 kubelet[2533]: E0124 00:31:34.966537 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:34.966811 kubelet[2533]: E0124 00:31:34.966571 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:31:35.814285 kubelet[2533]: E0124 00:31:35.814240 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:31:36.816869 containerd[1458]: time="2026-01-24T00:31:36.816259460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:31:36.953303 containerd[1458]: time="2026-01-24T00:31:36.953233211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:36.954237 containerd[1458]: time="2026-01-24T00:31:36.954194152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:31:36.954343 containerd[1458]: 
time="2026-01-24T00:31:36.954312764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:31:36.954602 kubelet[2533]: E0124 00:31:36.954550 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:36.956308 kubelet[2533]: E0124 00:31:36.954622 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:31:36.956308 kubelet[2533]: E0124 00:31:36.954828 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:36.956428 containerd[1458]: time="2026-01-24T00:31:36.955432877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:37.089574 containerd[1458]: time="2026-01-24T00:31:37.089218711Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:37.090698 containerd[1458]: time="2026-01-24T00:31:37.090586617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:37.090698 containerd[1458]: time="2026-01-24T00:31:37.090659188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:37.091269 kubelet[2533]: E0124 00:31:37.091067 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:37.091269 kubelet[2533]: E0124 00:31:37.091137 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:37.092328 kubelet[2533]: E0124 00:31:37.091696 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:37.092328 kubelet[2533]: E0124 00:31:37.091742 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:31:37.092523 containerd[1458]: time="2026-01-24T00:31:37.092043814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:31:37.223563 containerd[1458]: time="2026-01-24T00:31:37.223479555Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:37.224567 containerd[1458]: time="2026-01-24T00:31:37.224526827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:31:37.224655 containerd[1458]: time="2026-01-24T00:31:37.224612618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:31:37.226016 kubelet[2533]: E0124 00:31:37.224809 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:37.226016 kubelet[2533]: E0124 00:31:37.224861 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:31:37.226016 kubelet[2533]: E0124 00:31:37.224936 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:37.226208 kubelet[2533]: E0124 00:31:37.224986 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:31:37.817154 containerd[1458]: time="2026-01-24T00:31:37.816881951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:31:37.958091 containerd[1458]: time="2026-01-24T00:31:37.958031037Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:31:37.959174 containerd[1458]: time="2026-01-24T00:31:37.959105170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:31:37.959248 containerd[1458]: time="2026-01-24T00:31:37.959211171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:31:37.960312 kubelet[2533]: E0124 00:31:37.959402 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:37.960312 kubelet[2533]: E0124 00:31:37.959440 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:31:37.960312 kubelet[2533]: E0124 00:31:37.959500 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:31:37.960312 kubelet[2533]: E0124 00:31:37.959533 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:31:40.821301 kubelet[2533]: E0124 00:31:40.821234 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:31:41.815715 kubelet[2533]: E0124 00:31:41.815323 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:31:42.814220 kubelet[2533]: E0124 00:31:42.813690 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:31:45.814899 kubelet[2533]: E0124 00:31:45.814633 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:31:47.814924 kubelet[2533]: E0124 00:31:47.814713 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:31:49.814738 kubelet[2533]: E0124 00:31:49.814685 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:31:50.816284 kubelet[2533]: E0124 00:31:50.816139 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:31:52.815855 kubelet[2533]: E0124 00:31:52.814792 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:31:52.818773 kubelet[2533]: E0124 00:31:52.817925 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:31:52.819325 kubelet[2533]: E0124 00:31:52.818120 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:31:52.819899 kubelet[2533]: E0124 00:31:52.819566 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:31:55.542832 systemd[1]: run-containerd-runc-k8s.io-43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8-runc.gMBhDg.mount: Deactivated successfully. 
Jan 24 00:31:59.814484 kubelet[2533]: E0124 00:31:59.814434 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:32:02.815145 kubelet[2533]: E0124 00:32:02.814496 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:32:03.817636 kubelet[2533]: E0124 00:32:03.817497 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:32:03.821216 kubelet[2533]: E0124 00:32:03.821123 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:32:04.817708 kubelet[2533]: E0124 00:32:04.817169 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:32:05.815033 kubelet[2533]: E0124 00:32:05.814496 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:32:05.816068 kubelet[2533]: E0124 00:32:05.815984 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:32:07.815167 kubelet[2533]: E0124 00:32:07.814910 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9" Jan 24 00:32:10.818027 kubelet[2533]: E0124 00:32:10.817772 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 24 00:32:13.814716 kubelet[2533]: E0124 00:32:13.814657 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f" Jan 24 00:32:14.817175 kubelet[2533]: E0124 00:32:14.816373 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb" Jan 24 00:32:15.816446 containerd[1458]: time="2026-01-24T00:32:15.816171298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:32:16.028548 containerd[1458]: time="2026-01-24T00:32:16.028464523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:32:16.032559 containerd[1458]: time="2026-01-24T00:32:16.030178691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:32:16.032559 containerd[1458]: time="2026-01-24T00:32:16.030291732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:32:16.032746 kubelet[2533]: E0124 00:32:16.030714 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:32:16.032746 kubelet[2533]: E0124 00:32:16.030761 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:32:16.032746 kubelet[2533]: E0124 00:32:16.030853 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:32:16.034518 containerd[1458]: time="2026-01-24T00:32:16.034276170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:32:16.181510 containerd[1458]: time="2026-01-24T00:32:16.181263098Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:32:16.182511 containerd[1458]: time="2026-01-24T00:32:16.182411844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:32:16.183148 containerd[1458]: time="2026-01-24T00:32:16.182593974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:32:16.183215 kubelet[2533]: E0124 00:32:16.182734 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:32:16.183215 kubelet[2533]: E0124 00:32:16.182765 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:32:16.183215 kubelet[2533]: E0124 00:32:16.182824 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86547fc664-566mp_calico-system(1f8681ee-3380-4dd8-9bb7-c40be678fb1b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:32:16.183380 kubelet[2533]: E0124 00:32:16.182859 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b" Jan 24 00:32:16.815519 kubelet[2533]: E0124 00:32:16.815355 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35" Jan 24 00:32:16.819222 containerd[1458]: time="2026-01-24T00:32:16.816670964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:32:16.953494 containerd[1458]: time="2026-01-24T00:32:16.953439343Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:32:16.954458 containerd[1458]: time="2026-01-24T00:32:16.954407048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:32:16.954458 containerd[1458]: time="2026-01-24T00:32:16.954473678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 
00:32:16.954693 kubelet[2533]: E0124 00:32:16.954610 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:32:16.954693 kubelet[2533]: E0124 00:32:16.954643 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:32:16.954693 kubelet[2533]: E0124 00:32:16.954695 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pq45k_calico-system(abb28b3a-6878-432c-ab4c-0e09969f7334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:32:16.954853 kubelet[2533]: E0124 00:32:16.954721 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334" Jan 24 00:32:21.815353 containerd[1458]: time="2026-01-24T00:32:21.815109290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:32:21.946330 containerd[1458]: time="2026-01-24T00:32:21.946281397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:32:21.947179 containerd[1458]: time="2026-01-24T00:32:21.947125381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:32:21.947250 containerd[1458]: time="2026-01-24T00:32:21.947212981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:32:21.947375 kubelet[2533]: E0124 00:32:21.947325 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:32:21.947695 kubelet[2533]: E0124 00:32:21.947375 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" 
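The alternation in this log between ErrImagePull entries (a real pull that just failed) and ImagePullBackOff entries ("Back-off pulling image ...", emitted by pod_workers.go on each pod sync while the image is still in back-off) is kubelet's per-image retry schedule: each failed pull roughly doubles the wait before the next genuine attempt. That is consistent with the pulls for the same image drifting from a few seconds apart early in the log to roughly a minute apart here. Below is a minimal model of that schedule, assuming kubelet's commonly cited defaults of a 10-second initial delay capped at 300 seconds; those numbers are an assumption, not read from this node's configuration.

// backoff_sketch.go: a capped exponential back-off of the shape that
// produces the ErrImagePull / ImagePullBackOff alternation above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const initial = 10 * time.Second  // assumed kubelet default
	const maxDelay = 300 * time.Second // assumed kubelet cap

	delay := initial
	var elapsed time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		// Every attempt here fails with NotFound, so the next real pull
		// waits twice as long, up to the cap; pod-sync jitter shifts the
		// actual timestamps seen in the journal.
		fmt.Printf("attempt %d at t+%v, next retry in %v\n", attempt, elapsed, delay)
		elapsed += delay
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Run as-is this prints delays of 10s, 20s, 40s, 80s, 160s, then 300s repeating, which is why a missing tag settles into one failed pull per image roughly every five minutes rather than hammering the registry.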
Jan 24 00:32:21.947695 kubelet[2533]: E0124 00:32:21.947433 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-6mcp2_calico-apiserver(a484550e-d179-4ca3-a2ad-d4ef7f1868f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:21.947695 kubelet[2533]: E0124 00:32:21.947462 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9"
Jan 24 00:32:25.556682 systemd[1]: run-containerd-runc-k8s.io-43f8250a43fa222e872a52b43ba279b0445397013a21771d8059b23936caa2a8-runc.bJBH9H.mount: Deactivated successfully.
Jan 24 00:32:26.818209 containerd[1458]: time="2026-01-24T00:32:26.817593287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:32:26.951744 containerd[1458]: time="2026-01-24T00:32:26.951668657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:26.953525 containerd[1458]: time="2026-01-24T00:32:26.953487475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:32:26.953603 containerd[1458]: time="2026-01-24T00:32:26.953570375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:32:26.954282 kubelet[2533]: E0124 00:32:26.953729 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:32:26.954282 kubelet[2533]: E0124 00:32:26.953784 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:32:26.954282 kubelet[2533]: E0124 00:32:26.953964 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d4dbbbd84-pgmv7_calico-system(6177d0af-c7ec-41af-a5e7-d14d37e79e3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:26.954282 kubelet[2533]: E0124 00:32:26.954009 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f"
Jan 24 00:32:26.955372 containerd[1458]: time="2026-01-24T00:32:26.954901670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:32:27.088970 containerd[1458]: time="2026-01-24T00:32:27.088850734Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:27.089856 containerd[1458]: time="2026-01-24T00:32:27.089818748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:32:27.089950 containerd[1458]: time="2026-01-24T00:32:27.089898048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:32:27.090279 kubelet[2533]: E0124 00:32:27.090160 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:32:27.090279 kubelet[2533]: E0124 00:32:27.090207 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:32:27.090348 kubelet[2533]: E0124 00:32:27.090301 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:27.092907 containerd[1458]: time="2026-01-24T00:32:27.092719449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:32:27.226575 containerd[1458]: time="2026-01-24T00:32:27.226383979Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:27.227361 containerd[1458]: time="2026-01-24T00:32:27.227238662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:32:27.227500 containerd[1458]: time="2026-01-24T00:32:27.227445583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:32:27.227846 kubelet[2533]: E0124 00:32:27.227797 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:32:27.228068 kubelet[2533]: E0124 00:32:27.227951 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:32:27.228219 kubelet[2533]: E0124 00:32:27.228144 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-484vf_calico-system(974bf216-052b-49fa-b0ab-b6a46ee1fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:27.228403 kubelet[2533]: E0124 00:32:27.228307 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb"
Jan 24 00:32:28.816814 containerd[1458]: time="2026-01-24T00:32:28.816206560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:32:28.954584 containerd[1458]: time="2026-01-24T00:32:28.954382088Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:32:28.956502 containerd[1458]: time="2026-01-24T00:32:28.956452306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:32:28.956706 containerd[1458]: time="2026-01-24T00:32:28.956522006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:32:28.957038 kubelet[2533]: E0124 00:32:28.956830 2533 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:28.957038 kubelet[2533]: E0124 00:32:28.956889 2533 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:32:28.957038 kubelet[2533]: E0124 00:32:28.956967 2533 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-94fb7866c-2j9nd_calico-apiserver(7a9ec77a-a441-4797-ab37-24de3d316a35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:32:28.957877 kubelet[2533]: E0124 00:32:28.957246 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35"
Jan 24 00:32:30.818092 kubelet[2533]: E0124 00:32:30.816935 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334"
Jan 24 00:32:30.822849 kubelet[2533]: E0124 00:32:30.822783 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b"
Jan 24 00:32:31.682757 systemd[1]: Started sshd@7-172.234.200.204:22-68.220.241.50:59554.service - OpenSSH per-connection server daemon (68.220.241.50:59554).
Jan 24 00:32:31.834168 sshd[5278]: Accepted publickey for core from 68.220.241.50 port 59554 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:31.837741 sshd[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:31.846482 systemd-logind[1448]: New session 8 of user core.
Jan 24 00:32:31.851137 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 24 00:32:32.054929 sshd[5278]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:32.059665 systemd[1]: sshd@7-172.234.200.204:22-68.220.241.50:59554.service: Deactivated successfully.
Jan 24 00:32:32.062947 systemd[1]: session-8.scope: Deactivated successfully.
Jan 24 00:32:32.064749 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit.
Jan 24 00:32:32.065871 systemd-logind[1448]: Removed session 8.
Jan 24 00:32:36.815592 kubelet[2533]: E0124 00:32:36.814912 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9"
Jan 24 00:32:37.097467 systemd[1]: Started sshd@8-172.234.200.204:22-68.220.241.50:43522.service - OpenSSH per-connection server daemon (68.220.241.50:43522).
Jan 24 00:32:37.259772 sshd[5295]: Accepted publickey for core from 68.220.241.50 port 43522 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:37.261391 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:37.266312 systemd-logind[1448]: New session 9 of user core.
Jan 24 00:32:37.273140 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 24 00:32:37.449616 sshd[5295]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:37.454792 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit.
Jan 24 00:32:37.455890 systemd[1]: sshd@8-172.234.200.204:22-68.220.241.50:43522.service: Deactivated successfully.
Jan 24 00:32:37.460482 systemd[1]: session-9.scope: Deactivated successfully.
Jan 24 00:32:37.462590 systemd-logind[1448]: Removed session 9.
Jan 24 00:32:38.814126 kubelet[2533]: E0124 00:32:38.814083 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:32:39.817535 kubelet[2533]: E0124 00:32:39.817488 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f"
Jan 24 00:32:39.818103 kubelet[2533]: E0124 00:32:39.817811 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb"
Jan 24 00:32:40.817271 kubelet[2533]: E0124 00:32:40.817207 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35"
Jan 24 00:32:42.491144 systemd[1]: Started sshd@9-172.234.200.204:22-68.220.241.50:55122.service - OpenSSH per-connection server daemon (68.220.241.50:55122).
Jan 24 00:32:42.669120 sshd[5309]: Accepted publickey for core from 68.220.241.50 port 55122 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:42.670730 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:42.677709 systemd-logind[1448]: New session 10 of user core.
Jan 24 00:32:42.679139 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 24 00:32:42.816888 kubelet[2533]: E0124 00:32:42.816842 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b"
Jan 24 00:32:42.903427 sshd[5309]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:42.908702 systemd[1]: sshd@9-172.234.200.204:22-68.220.241.50:55122.service: Deactivated successfully.
Jan 24 00:32:42.911928 systemd[1]: session-10.scope: Deactivated successfully.
Jan 24 00:32:42.914279 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit.
Jan 24 00:32:42.915314 systemd-logind[1448]: Removed session 10.
Jan 24 00:32:42.942075 systemd[1]: Started sshd@10-172.234.200.204:22-68.220.241.50:55138.service - OpenSSH per-connection server daemon (68.220.241.50:55138).
Jan 24 00:32:43.108113 sshd[5323]: Accepted publickey for core from 68.220.241.50 port 55138 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:43.109767 sshd[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:43.120183 systemd-logind[1448]: New session 11 of user core.
Jan 24 00:32:43.125164 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 24 00:32:43.348480 sshd[5323]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:43.352500 systemd[1]: sshd@10-172.234.200.204:22-68.220.241.50:55138.service: Deactivated successfully.
Jan 24 00:32:43.352747 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit.
Jan 24 00:32:43.356919 systemd[1]: session-11.scope: Deactivated successfully.
Jan 24 00:32:43.359425 systemd-logind[1448]: Removed session 11.
Jan 24 00:32:43.373887 systemd[1]: Started sshd@11-172.234.200.204:22-68.220.241.50:55142.service - OpenSSH per-connection server daemon (68.220.241.50:55142).
Jan 24 00:32:43.523558 sshd[5338]: Accepted publickey for core from 68.220.241.50 port 55142 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:43.526164 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:43.531674 systemd-logind[1448]: New session 12 of user core.
Jan 24 00:32:43.538146 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 24 00:32:43.718571 sshd[5338]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:43.725937 systemd[1]: sshd@11-172.234.200.204:22-68.220.241.50:55142.service: Deactivated successfully.
Jan 24 00:32:43.729886 systemd[1]: session-12.scope: Deactivated successfully.
Jan 24 00:32:43.731061 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit.
Jan 24 00:32:43.733476 systemd-logind[1448]: Removed session 12.
Jan 24 00:32:44.815956 kubelet[2533]: E0124 00:32:44.815532 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334"
Jan 24 00:32:48.758221 systemd[1]: Started sshd@12-172.234.200.204:22-68.220.241.50:55146.service - OpenSSH per-connection server daemon (68.220.241.50:55146).
Jan 24 00:32:48.923886 sshd[5351]: Accepted publickey for core from 68.220.241.50 port 55146 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:48.925425 sshd[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:48.931104 systemd-logind[1448]: New session 13 of user core.
Jan 24 00:32:48.941138 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 24 00:32:49.132755 sshd[5351]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:49.136459 systemd[1]: sshd@12-172.234.200.204:22-68.220.241.50:55146.service: Deactivated successfully.
Jan 24 00:32:49.138663 systemd[1]: session-13.scope: Deactivated successfully.
Jan 24 00:32:49.140606 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit.
Jan 24 00:32:49.143911 systemd-logind[1448]: Removed session 13.
Jan 24 00:32:49.167225 systemd[1]: Started sshd@13-172.234.200.204:22-68.220.241.50:55156.service - OpenSSH per-connection server daemon (68.220.241.50:55156).
Jan 24 00:32:49.312504 sshd[5364]: Accepted publickey for core from 68.220.241.50 port 55156 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:49.314527 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:49.321049 systemd-logind[1448]: New session 14 of user core.
Jan 24 00:32:49.326165 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 24 00:32:49.650906 sshd[5364]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:49.656420 systemd[1]: sshd@13-172.234.200.204:22-68.220.241.50:55156.service: Deactivated successfully.
Jan 24 00:32:49.659416 systemd[1]: session-14.scope: Deactivated successfully.
Jan 24 00:32:49.662283 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit.
Jan 24 00:32:49.663836 systemd-logind[1448]: Removed session 14.
Jan 24 00:32:49.686806 systemd[1]: Started sshd@14-172.234.200.204:22-68.220.241.50:55168.service - OpenSSH per-connection server daemon (68.220.241.50:55168).
Jan 24 00:32:49.833695 sshd[5375]: Accepted publickey for core from 68.220.241.50 port 55168 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:49.835705 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:49.841141 systemd-logind[1448]: New session 15 of user core.
Jan 24 00:32:49.847281 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 24 00:32:50.444577 sshd[5375]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:50.449355 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit.
Jan 24 00:32:50.452538 systemd[1]: sshd@14-172.234.200.204:22-68.220.241.50:55168.service: Deactivated successfully.
Jan 24 00:32:50.455907 systemd[1]: session-15.scope: Deactivated successfully.
Jan 24 00:32:50.459531 systemd-logind[1448]: Removed session 15.
Jan 24 00:32:50.477244 systemd[1]: Started sshd@15-172.234.200.204:22-68.220.241.50:55182.service - OpenSSH per-connection server daemon (68.220.241.50:55182).
Jan 24 00:32:50.622979 sshd[5391]: Accepted publickey for core from 68.220.241.50 port 55182 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:50.624583 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:50.631668 systemd-logind[1448]: New session 16 of user core.
Jan 24 00:32:50.634144 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:32:50.818037 kubelet[2533]: E0124 00:32:50.817388 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9"
Jan 24 00:32:50.966950 sshd[5391]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:50.969937 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:32:50.971156 systemd[1]: sshd@15-172.234.200.204:22-68.220.241.50:55182.service: Deactivated successfully.
Jan 24 00:32:50.973831 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:32:50.977902 systemd-logind[1448]: Removed session 16.
Jan 24 00:32:51.003855 systemd[1]: Started sshd@16-172.234.200.204:22-68.220.241.50:55188.service - OpenSSH per-connection server daemon (68.220.241.50:55188).
Jan 24 00:32:51.148459 sshd[5402]: Accepted publickey for core from 68.220.241.50 port 55188 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:51.150346 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:51.159743 systemd-logind[1448]: New session 17 of user core.
Jan 24 00:32:51.161981 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:32:51.348865 sshd[5402]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:51.356308 systemd[1]: sshd@16-172.234.200.204:22-68.220.241.50:55188.service: Deactivated successfully.
Jan 24 00:32:51.356595 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:32:51.359135 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:32:51.361508 systemd-logind[1448]: Removed session 17.
Jan 24 00:32:51.815179 kubelet[2533]: E0124 00:32:51.815134 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35"
Jan 24 00:32:52.818217 kubelet[2533]: E0124 00:32:52.818145 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb"
Jan 24 00:32:54.816327 kubelet[2533]: E0124 00:32:54.816068 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f"
Jan 24 00:32:55.815176 kubelet[2533]: E0124 00:32:55.814358 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:32:55.816926 kubelet[2533]: E0124 00:32:55.816890 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334"
Jan 24 00:32:56.392345 systemd[1]: Started sshd@17-172.234.200.204:22-68.220.241.50:35190.service - OpenSSH per-connection server daemon (68.220.241.50:35190).
Jan 24 00:32:56.553866 sshd[5441]: Accepted publickey for core from 68.220.241.50 port 35190 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:32:56.554578 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:32:56.559451 systemd-logind[1448]: New session 18 of user core.
Jan 24 00:32:56.566169 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:32:56.749243 sshd[5441]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:56.758930 systemd[1]: sshd@17-172.234.200.204:22-68.220.241.50:35190.service: Deactivated successfully.
Jan 24 00:32:56.762220 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:32:56.763548 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:32:56.765361 systemd-logind[1448]: Removed session 18.
Jan 24 00:32:56.818767 kubelet[2533]: E0124 00:32:56.818655 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b"
Jan 24 00:33:01.786418 systemd[1]: Started sshd@18-172.234.200.204:22-68.220.241.50:35202.service - OpenSSH per-connection server daemon (68.220.241.50:35202).
Jan 24 00:33:01.931769 sshd[5454]: Accepted publickey for core from 68.220.241.50 port 35202 ssh2: RSA SHA256:F6ggEkBgySDLJWyp4ASY8nqziNzyaI/r3/gkxYJ8Qu4
Jan 24 00:33:01.933155 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:33:01.939832 systemd-logind[1448]: New session 19 of user core.
Jan 24 00:33:01.945155 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:33:02.150227 sshd[5454]: pam_unix(sshd:session): session closed for user core
Jan 24 00:33:02.154567 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:33:02.156576 systemd[1]: sshd@18-172.234.200.204:22-68.220.241.50:35202.service: Deactivated successfully.
Jan 24 00:33:02.161914 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:33:02.165136 systemd-logind[1448]: Removed session 19.
Jan 24 00:33:03.816682 kubelet[2533]: E0124 00:33:03.816634 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9"
Jan 24 00:33:03.818552 kubelet[2533]: E0124 00:33:03.817941 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35"
Jan 24 00:33:03.819971 kubelet[2533]: E0124 00:33:03.819884 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb"
Jan 24 00:33:07.814530 kubelet[2533]: E0124 00:33:07.814488 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334"
Jan 24 00:33:09.814476 kubelet[2533]: E0124 00:33:09.814417 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f"
Jan 24 00:33:10.816894 kubelet[2533]: E0124 00:33:10.816853 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:33:11.816025 kubelet[2533]: E0124 00:33:11.814177 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:33:11.816191 kubelet[2533]: E0124 00:33:11.816075 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b"
Jan 24 00:33:14.814977 kubelet[2533]: E0124 00:33:14.814620 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-2j9nd" podUID="7a9ec77a-a441-4797-ab37-24de3d316a35"
Jan 24 00:33:14.817594 kubelet[2533]: E0124 00:33:14.816104 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-484vf" podUID="974bf216-052b-49fa-b0ab-b6a46ee1fdcb"
Jan 24 00:33:17.814923 kubelet[2533]: E0124 00:33:17.814025 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:33:17.815399 kubelet[2533]: E0124 00:33:17.815060 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-94fb7866c-6mcp2" podUID="a484550e-d179-4ca3-a2ad-d4ef7f1868f9"
Jan 24 00:33:18.814742 kubelet[2533]: E0124 00:33:18.814421 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pq45k" podUID="abb28b3a-6878-432c-ab4c-0e09969f7334"
Jan 24 00:33:20.815424 kubelet[2533]: E0124 00:33:20.815052 2533 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Jan 24 00:33:20.816328 kubelet[2533]: E0124 00:33:20.816048 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d4dbbbd84-pgmv7" podUID="6177d0af-c7ec-41af-a5e7-d14d37e79e3f"
Jan 24 00:33:24.819453 kubelet[2533]: E0124 00:33:24.819348 2533 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86547fc664-566mp" podUID="1f8681ee-3380-4dd8-9bb7-c40be678fb1b"