Jan 23 01:06:47.964430 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:06:47.964466 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:47.964479 kernel: BIOS-provided physical RAM map:
Jan 23 01:06:47.964486 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 23 01:06:47.964492 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 01:06:47.964498 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 01:06:47.964508 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 23 01:06:47.964514 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 23 01:06:47.964520 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 01:06:47.964526 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 01:06:47.964532 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:06:47.964538 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 01:06:47.964544 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 01:06:47.964550 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:06:47.964560 kernel: NX (Execute Disable) protection: active
Jan 23 01:06:47.964566 kernel: APIC: Static calls initialized
Jan 23 01:06:47.964572 kernel: SMBIOS 2.8 present.
Jan 23 01:06:47.964579 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 23 01:06:47.964585 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:06:47.964592 kernel: Hypervisor detected: KVM
Jan 23 01:06:47.964600 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 01:06:47.964606 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:06:47.964612 kernel: kvm-clock: using sched offset of 7161420849 cycles
Jan 23 01:06:47.964619 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:06:47.964626 kernel: tsc: Detected 1999.996 MHz processor
Jan 23 01:06:47.964633 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:06:47.964640 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:06:47.964647 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 23 01:06:47.964654 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 01:06:47.964660 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:06:47.964669 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 01:06:47.964675 kernel: Using GB pages for direct mapping
Jan 23 01:06:47.964682 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:06:47.964688 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 23 01:06:47.964695 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964701 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964708 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964715 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 23 01:06:47.964721 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964948 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964958 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964965 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:06:47.964972 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 23 01:06:47.964979 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 23 01:06:47.964987 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 23 01:06:47.964994 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 23 01:06:47.965001 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 23 01:06:47.965008 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 23 01:06:47.965015 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 23 01:06:47.965021 kernel: No NUMA configuration found
Jan 23 01:06:47.965028 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 01:06:47.965035 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 23 01:06:47.965042 kernel: Zone ranges:
Jan 23 01:06:47.965051 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:06:47.965057 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:06:47.965064 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 01:06:47.965071 kernel: Device empty
Jan 23 01:06:47.965078 kernel: Movable zone start for each node
Jan 23 01:06:47.965084 kernel: Early memory node ranges
Jan 23 01:06:47.965091 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 01:06:47.965098 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 23 01:06:47.965105 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 01:06:47.965112 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 01:06:47.965121 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:06:47.965128 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 01:06:47.965135 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 23 01:06:47.965142 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:06:47.965149 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:06:47.965156 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:06:47.965162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:06:47.965169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:06:47.965176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:06:47.965185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:06:47.965192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:06:47.965198 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:06:47.965205 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:06:47.965212 kernel: TSC deadline timer available
Jan 23 01:06:47.965219 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:06:47.965226 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:06:47.965232 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:06:47.965239 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:06:47.965248 kernel: CPU topo: Num. cores per package: 2
Jan 23 01:06:47.965254 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:06:47.965261 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:06:47.965268 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:06:47.965274 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 01:06:47.965281 kernel: kvm-guest: setup PV sched yield
Jan 23 01:06:47.965288 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 01:06:47.965295 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:06:47.965302 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:06:47.965311 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:06:47.965318 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:06:47.965325 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:06:47.965331 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:06:47.965338 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:06:47.965345 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:06:47.965352 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:47.965360 kernel: random: crng init done
Jan 23 01:06:47.965368 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:06:47.965375 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:06:47.965382 kernel: Fallback order for Node 0: 0
Jan 23 01:06:47.965389 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 23 01:06:47.965396 kernel: Policy zone: Normal
Jan 23 01:06:47.965402 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:06:47.965409 kernel: software IO TLB: area num 2.
Jan 23 01:06:47.965416 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:06:47.965422 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:06:47.965431 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:06:47.965438 kernel: Dynamic Preempt: voluntary
Jan 23 01:06:47.965444 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:06:47.965452 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:06:47.965459 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:06:47.965466 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:06:47.965473 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:06:47.965479 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:06:47.965486 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:06:47.965493 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:06:47.965502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:06:47.965516 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:06:47.965525 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:06:47.965532 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:06:47.965539 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:06:47.965546 kernel: Console: colour VGA+ 80x25
Jan 23 01:06:47.965553 kernel: printk: legacy console [tty0] enabled
Jan 23 01:06:47.965560 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:06:47.965567 kernel: ACPI: Core revision 20240827
Jan 23 01:06:47.965576 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 01:06:47.965583 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:06:47.965590 kernel: x2apic enabled
Jan 23 01:06:47.965597 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:06:47.965604 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 01:06:47.965611 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 01:06:47.965618 kernel: kvm-guest: setup PV IPIs
Jan 23 01:06:47.965627 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 01:06:47.965635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Jan 23 01:06:47.965642 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999996)
Jan 23 01:06:47.965649 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:06:47.965656 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 01:06:47.965663 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 01:06:47.965670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:06:47.965677 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:06:47.965684 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:06:47.965693 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 01:06:47.965700 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:06:47.965708 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:06:47.965715 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 01:06:47.965722 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 01:06:47.965729 kernel: active return thunk: srso_alias_return_thunk
Jan 23 01:06:47.965736 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 01:06:47.965743 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 01:06:47.965752 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:06:47.970221 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:06:47.970231 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:06:47.970238 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:06:47.970245 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:06:47.970253 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:06:47.970260 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 23 01:06:47.970267 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 23 01:06:47.970274 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:06:47.970286 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:06:47.970293 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:06:47.970300 kernel: landlock: Up and running.
Jan 23 01:06:47.970307 kernel: SELinux: Initializing.
Jan 23 01:06:47.970314 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:06:47.970322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:06:47.970329 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 01:06:47.970336 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 01:06:47.970343 kernel: ... version: 0
Jan 23 01:06:47.970353 kernel: ... bit width: 48
Jan 23 01:06:47.970360 kernel: ... generic registers: 6
Jan 23 01:06:47.970367 kernel: ... value mask: 0000ffffffffffff
Jan 23 01:06:47.970374 kernel: ... max period: 00007fffffffffff
Jan 23 01:06:47.970381 kernel: ... fixed-purpose events: 0
Jan 23 01:06:47.970389 kernel: ... event mask: 000000000000003f
Jan 23 01:06:47.970396 kernel: signal: max sigframe size: 3376
Jan 23 01:06:47.970403 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:06:47.970410 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:06:47.970420 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:06:47.970427 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:06:47.970434 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:06:47.970441 kernel: .... node #0, CPUs: #1
Jan 23 01:06:47.970448 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:06:47.970455 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Jan 23 01:06:47.970463 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 235480K reserved, 0K cma-reserved)
Jan 23 01:06:47.970470 kernel: devtmpfs: initialized
Jan 23 01:06:47.970477 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:06:47.970487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:06:47.970494 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:06:47.970501 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:06:47.970508 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:06:47.970515 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:06:47.970523 kernel: audit: type=2000 audit(1769130404.638:1): state=initialized audit_enabled=0 res=1
Jan 23 01:06:47.970530 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:06:47.970537 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:06:47.970544 kernel: cpuidle: using governor menu
Jan 23 01:06:47.970553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:06:47.970560 kernel: dca service started, version 1.12.1
Jan 23 01:06:47.970568 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 01:06:47.970777 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 01:06:47.970784 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:06:47.970792 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:06:47.970799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:06:47.970806 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:06:47.970813 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:06:47.970823 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:06:47.970830 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:06:47.970838 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:06:47.970845 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:06:47.970852 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:06:47.970859 kernel: ACPI: Interpreter enabled
Jan 23 01:06:47.970866 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:06:47.970873 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:06:47.970880 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:06:47.970890 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:06:47.970897 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:06:47.970904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:06:47.971087 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:06:47.971219 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:06:47.971345 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:06:47.971355 kernel: PCI host bridge to bus 0000:00
Jan 23 01:06:47.971485 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:06:47.971623 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:06:47.971961 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:06:47.972126 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 01:06:47.972243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 01:06:47.972355 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 23 01:06:47.972466 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:06:47.972821 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:06:47.972963 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:06:47.973089 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 01:06:47.973210 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 01:06:47.973329 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 01:06:47.973449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:06:47.973580 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:06:47.973707 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 01:06:47.975884 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 01:06:47.976047 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 01:06:47.976186 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:06:47.976312 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 01:06:47.976434 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 01:06:47.976561 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 01:06:47.976682 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 01:06:47.976974 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:06:47.977099 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:06:47.977227 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:06:47.977349 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 23 01:06:47.977470 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 23 01:06:47.977603 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:06:47.979110 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 01:06:47.979131 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:06:47.979140 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:06:47.979147 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:06:47.979154 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:06:47.979162 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:06:47.979169 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:06:47.979181 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:06:47.979188 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:06:47.979195 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:06:47.979202 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:06:47.979209 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:06:47.979216 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:06:47.979223 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:06:47.979230 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:06:47.979238 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:06:47.979247 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:06:47.979254 kernel: iommu: Default domain type: Translated
Jan 23 01:06:47.979262 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:06:47.979269 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:06:47.979276 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:06:47.979283 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 23 01:06:47.979290 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 23 01:06:47.979428 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:06:47.979557 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:06:47.979901 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:06:47.979912 kernel: vgaarb: loaded
Jan 23 01:06:47.979919 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:06:47.979927 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:06:47.979934 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:06:47.979941 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:06:47.979949 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:06:47.979956 kernel: pnp: PnP ACPI init
Jan 23 01:06:47.980094 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 01:06:47.980106 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:06:47.980114 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:06:47.980121 kernel: NET: Registered PF_INET protocol family
Jan 23 01:06:47.980128 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:06:47.980136 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:06:47.980143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:06:47.980151 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:06:47.980161 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:06:47.980168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:06:47.980175 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:06:47.980182 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:06:47.980190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:06:47.980197 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:06:47.980310 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:06:47.980422 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:06:47.980533 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:06:47.980649 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 01:06:47.981981 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 01:06:47.982143 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 23 01:06:47.982162 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:06:47.982170 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:06:47.982177 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 23 01:06:47.982185 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8554e05d, max_idle_ns: 881590540420 ns
Jan 23 01:06:47.982192 kernel: Initialise system trusted keyrings
Jan 23 01:06:47.982203 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:06:47.982210 kernel: Key type asymmetric registered
Jan 23 01:06:47.982217 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:06:47.982224 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:06:47.982231 kernel: io scheduler mq-deadline registered
Jan 23 01:06:47.982238 kernel: io scheduler kyber registered
Jan 23 01:06:47.982245 kernel: io scheduler bfq registered
Jan 23 01:06:47.982253 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:06:47.982260 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:06:47.982270 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:06:47.982277 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:06:47.982284 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:06:47.982291 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:06:47.982299 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:06:47.982305 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:06:47.982313 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:06:47.982459 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 01:06:47.982840 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 01:06:47.982999 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:06:47 UTC (1769130407)
Jan 23 01:06:47.983118 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 01:06:47.983129 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:06:47.983136 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:06:47.983144 kernel: Segment Routing with IPv6
Jan 23 01:06:47.983151 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:06:47.983158 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:06:47.983165 kernel: Key type dns_resolver registered
Jan 23 01:06:47.983176 kernel: IPI shorthand broadcast: enabled
Jan 23 01:06:47.983183 kernel: sched_clock: Marking stable (3003003503, 361035720)->(3468679946, -104640723)
Jan 23 01:06:47.984791 kernel: registered taskstats version 1
Jan 23 01:06:47.984800 kernel: Loading compiled-in X.509 certificates
Jan 23 01:06:47.984808 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:06:47.984816 kernel: Demotion targets for Node 0: null
Jan 23 01:06:47.984823 kernel: Key type .fscrypt registered
Jan 23 01:06:47.984830 kernel: Key type fscrypt-provisioning registered
Jan 23 01:06:47.984837 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:06:47.984848 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:06:47.984855 kernel: ima: No architecture policies found
Jan 23 01:06:47.984862 kernel: clk: Disabling unused clocks
Jan 23 01:06:47.984869 kernel: Warning: unable to open an initial console.
Jan 23 01:06:47.984877 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:06:47.984884 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:06:47.984891 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:06:47.984898 kernel: Run /init as init process
Jan 23 01:06:47.984905 kernel: with arguments:
Jan 23 01:06:47.984915 kernel: /init
Jan 23 01:06:47.984922 kernel: with environment:
Jan 23 01:06:47.984943 kernel: HOME=/
Jan 23 01:06:47.984953 kernel: TERM=linux
Jan 23 01:06:47.984961 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:06:47.984972 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:06:47.984981 systemd[1]: Detected virtualization kvm.
Jan 23 01:06:47.984991 systemd[1]: Detected architecture x86-64.
Jan 23 01:06:47.984998 systemd[1]: Running in initrd.
Jan 23 01:06:47.985006 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:06:47.985014 systemd[1]: Hostname set to <localhost>.
Jan 23 01:06:47.985022 systemd[1]: Initializing machine ID from random generator.
Jan 23 01:06:47.985029 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:06:47.985037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:06:47.985045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:06:47.985056 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:06:47.985064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:06:47.985072 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:06:47.985081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:06:47.985090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:06:47.985098 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:06:47.985106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:06:47.985116 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:06:47.985123 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:06:47.985131 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:06:47.985139 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:06:47.985147 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:06:47.985154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:06:47.985162 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:06:47.985170 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:06:47.985178 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:06:47.985188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:06:47.985196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:06:47.985206 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:06:47.985214 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:06:47.985222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:06:47.985232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:06:47.985240 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:06:47.985248 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:06:47.985256 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:06:47.985266 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:06:47.985274 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:06:47.985282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:47.985289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:06:47.985300 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:06:47.985308 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:06:47.985340 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 01:06:47.985363 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:06:47.985372 systemd-journald[187]: Journal started
Jan 23 01:06:47.985389 systemd-journald[187]: Runtime Journal (/run/log/journal/392f427f7aa1478d80f8a5563c4fbca9) is 8M, max 78.2M, 70.2M free.
Jan 23 01:06:47.955584 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 01:06:47.992220 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:06:48.000902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:06:48.125262 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:06:48.125296 kernel: Bridge firewalling registered
Jan 23 01:06:48.006846 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 01:06:48.132941 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:06:48.135292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:48.137261 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:06:48.139289 systemd-tmpfiles[201]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:06:48.142925 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:06:48.148029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:06:48.150816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:06:48.158939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:06:48.169572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:06:48.172145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:06:48.176912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:06:48.179092 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:06:48.182859 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:06:48.201635 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:06:48.225020 systemd-resolved[223]: Positive Trust Anchors:
Jan 23 01:06:48.225033 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:06:48.225060 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:06:48.228351 systemd-resolved[223]: Defaulting to hostname 'linux'.
Jan 23 01:06:48.229654 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:06:48.233898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:06:48.302809 kernel: SCSI subsystem initialized
Jan 23 01:06:48.312777 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:06:48.324781 kernel: iscsi: registered transport (tcp)
Jan 23 01:06:48.347264 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:06:48.347303 kernel: QLogic iSCSI HBA Driver
Jan 23 01:06:48.369375 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:06:48.384505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:06:48.387215 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:06:48.430959 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:06:48.433912 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:06:48.487791 kernel: raid6: avx2x4 gen() 32312 MB/s
Jan 23 01:06:48.505993 kernel: raid6: avx2x2 gen() 29217 MB/s
Jan 23 01:06:48.524349 kernel: raid6: avx2x1 gen() 20746 MB/s
Jan 23 01:06:48.524379 kernel: raid6: using algorithm avx2x4 gen() 32312 MB/s
Jan 23 01:06:48.547015 kernel: raid6: .... xor() 4878 MB/s, rmw enabled
Jan 23 01:06:48.547051 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 01:06:48.569984 kernel: xor: automatically using best checksumming function avx
Jan 23 01:06:48.705792 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:06:48.712407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:06:48.715309 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:06:48.741314 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Jan 23 01:06:48.748113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:06:48.751910 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:06:48.775656 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Jan 23 01:06:48.804397 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:06:48.806480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:06:48.887088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:06:48.891146 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:06:48.955780 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:06:48.975785 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:06:48.989807 kernel: libata version 3.00 loaded.
Jan 23 01:06:48.993778 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 23 01:06:49.000499 kernel: scsi host0: Virtio SCSI HBA
Jan 23 01:06:49.008778 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 01:06:49.012693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:06:49.012785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:49.017371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:49.023087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:06:49.025051 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:06:49.049970 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:06:49.054123 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:06:49.215809 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:06:49.215866 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:06:49.216104 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:06:49.216252 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:06:49.216394 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 01:06:49.221780 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 23 01:06:49.221966 kernel: scsi host1: ahci
Jan 23 01:06:49.222124 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 01:06:49.222275 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 01:06:49.222424 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 01:06:49.226852 kernel: scsi host2: ahci
Jan 23 01:06:49.257801 kernel: scsi host3: ahci
Jan 23 01:06:49.258790 kernel: scsi host4: ahci
Jan 23 01:06:49.259058 kernel: scsi host5: ahci
Jan 23 01:06:49.259216 kernel: scsi host6: ahci
Jan 23 01:06:49.259245 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:06:49.259256 kernel: GPT:9289727 != 167739391
Jan 23 01:06:49.259267 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:06:49.259277 kernel: GPT:9289727 != 167739391
Jan 23 01:06:49.259287 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:06:49.259297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:06:49.260903 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 01:06:49.261097 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Jan 23 01:06:49.261109 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Jan 23 01:06:49.261120 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Jan 23 01:06:49.261130 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Jan 23 01:06:49.261139 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Jan 23 01:06:49.261149 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Jan 23 01:06:49.427777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:06:49.566798 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.571683 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.571706 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.574786 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.574838 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.579984 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:06:49.647431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 01:06:49.657486 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 01:06:49.675618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 01:06:49.676672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:06:49.686269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 01:06:49.687115 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 01:06:49.690422 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:06:49.691442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:06:49.693435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:06:49.696046 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:06:49.699865 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:06:49.719314 disk-uuid[611]: Primary Header is updated.
Jan 23 01:06:49.719314 disk-uuid[611]: Secondary Entries is updated.
Jan 23 01:06:49.719314 disk-uuid[611]: Secondary Header is updated.
Jan 23 01:06:49.730062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:06:49.733005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:06:49.752811 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:06:50.751462 disk-uuid[614]: The operation has completed successfully.
Jan 23 01:06:50.752656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:06:50.811127 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:06:50.811249 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:06:50.837964 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:06:50.854507 sh[633]: Success
Jan 23 01:06:50.878178 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:06:50.878221 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:06:50.879366 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:06:50.892815 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 01:06:50.940770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:06:50.943834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:06:50.962607 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:06:50.975802 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (645)
Jan 23 01:06:50.975832 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:06:50.981883 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:06:50.994744 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 01:06:50.994789 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:06:50.994805 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:06:50.999052 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:06:51.000121 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:06:51.001249 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:06:51.002878 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:06:51.005864 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:06:51.041030 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (680)
Jan 23 01:06:51.041078 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:06:51.045214 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:06:51.058724 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:06:51.058810 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:06:51.058823 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:06:51.066796 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:06:51.068412 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:06:51.071187 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:06:51.143027 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:06:51.152687 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:06:51.177348 ignition[746]: Ignition 2.22.0
Jan 23 01:06:51.177367 ignition[746]: Stage: fetch-offline
Jan 23 01:06:51.177414 ignition[746]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:06:51.177428 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:06:51.177536 ignition[746]: parsed url from cmdline: ""
Jan 23 01:06:51.177542 ignition[746]: no config URL provided
Jan 23 01:06:51.177550 ignition[746]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:06:51.177563 ignition[746]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:06:51.183415 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:06:51.177573 ignition[746]: failed to fetch config: resource requires networking
Jan 23 01:06:51.177898 ignition[746]: Ignition finished successfully
Jan 23 01:06:51.203652 systemd-networkd[819]: lo: Link UP
Jan 23 01:06:51.204581 systemd-networkd[819]: lo: Gained carrier
Jan 23 01:06:51.206657 systemd-networkd[819]: Enumeration completed
Jan 23 01:06:51.207224 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:06:51.207772 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:06:51.207777 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:06:51.208947 systemd[1]: Reached target network.target - Network.
Jan 23 01:06:51.210217 systemd-networkd[819]: eth0: Link UP
Jan 23 01:06:51.210769 systemd-networkd[819]: eth0: Gained carrier
Jan 23 01:06:51.210780 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:06:51.215904 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:06:51.243783 ignition[823]: Ignition 2.22.0
Jan 23 01:06:51.243793 ignition[823]: Stage: fetch
Jan 23 01:06:51.243907 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:06:51.243918 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:06:51.243982 ignition[823]: parsed url from cmdline: ""
Jan 23 01:06:51.243986 ignition[823]: no config URL provided
Jan 23 01:06:51.243991 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:06:51.244000 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:06:51.244035 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 23 01:06:51.244220 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:06:51.445101 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 23 01:06:51.445329 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:06:51.845649 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 23 01:06:51.845880 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:06:51.927869 systemd-networkd[819]: eth0: DHCPv4 address 172.239.48.230/24, gateway 172.239.48.1 acquired from 23.194.118.56
Jan 23 01:06:52.646826 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #4
Jan 23 01:06:52.744969 ignition[823]: PUT result: OK
Jan 23 01:06:52.745048 ignition[823]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 23 01:06:52.790204 systemd-networkd[819]: eth0: Gained IPv6LL
Jan 23 01:06:52.859927 ignition[823]: GET result: OK
Jan 23 01:06:52.861068 ignition[823]: parsing config with SHA512: 57b23ce4f5a10a7a92d70d0987d3d412a709a6860736dbbb2b08a96ce12f5878144656446d2ae88c77304232b2db42bd1a5b7a79c1a5d1f86cb50189c804fe90
Jan 23 01:06:52.866135 unknown[823]: fetched base config from "system"
Jan 23 01:06:52.875849 unknown[823]: fetched base config from "system"
Jan 23 01:06:52.876216 ignition[823]: fetch: fetch complete
Jan 23 01:06:52.875859 unknown[823]: fetched user config from "akamai"
Jan 23 01:06:52.876223 ignition[823]: fetch: fetch passed
Jan 23 01:06:52.879535 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:06:52.876303 ignition[823]: Ignition finished successfully
Jan 23 01:06:52.898929 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:06:52.949497 ignition[831]: Ignition 2.22.0
Jan 23 01:06:52.949513 ignition[831]: Stage: kargs
Jan 23 01:06:52.949687 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:06:52.949699 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:06:52.952729 ignition[831]: kargs: kargs passed
Jan 23 01:06:52.952838 ignition[831]: Ignition finished successfully
Jan 23 01:06:52.957275 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:06:52.960411 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:06:53.001831 ignition[837]: Ignition 2.22.0
Jan 23 01:06:53.001849 ignition[837]: Stage: disks
Jan 23 01:06:53.002004 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:06:53.002017 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 01:06:53.002841 ignition[837]: disks: disks passed
Jan 23 01:06:53.007687 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:06:53.002884 ignition[837]: Ignition finished successfully
Jan 23 01:06:53.009075 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:06:53.010409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:06:53.011922 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:06:53.013658 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:06:53.015401 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:06:53.017939 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:06:53.063715 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 01:06:53.068486 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 01:06:53.072908 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 01:06:53.190799 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 01:06:53.191892 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 01:06:53.193248 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:06:53.195857 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:06:53.198838 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 01:06:53.199828 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 01:06:53.199877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 01:06:53.199902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:06:53.215386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 01:06:53.217980 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 01:06:53.226791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853)
Jan 23 01:06:53.233801 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:06:53.233849 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:06:53.239827 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:06:53.239865 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:06:53.244835 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:06:53.247516 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:06:53.292375 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 01:06:53.300089 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory
Jan 23 01:06:53.304976 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 01:06:53.308411 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 01:06:53.430523 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 01:06:53.432832 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 01:06:53.435921 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 01:06:53.451010 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 01:06:53.455212 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:53.473055 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:06:53.483099 ignition[967]: INFO : Ignition 2.22.0 Jan 23 01:06:53.484186 ignition[967]: INFO : Stage: mount Jan 23 01:06:53.484186 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:53.484186 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:06:53.486791 ignition[967]: INFO : mount: mount passed Jan 23 01:06:53.486791 ignition[967]: INFO : Ignition finished successfully Jan 23 01:06:53.486449 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:06:53.489855 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:06:54.194331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:06:54.232792 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977) Jan 23 01:06:54.232837 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:06:54.236110 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:06:54.246373 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 01:06:54.246461 kernel: BTRFS info (device sda6): turning on async discard Jan 23 01:06:54.246476 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 01:06:54.251676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:06:54.290572 ignition[993]: INFO : Ignition 2.22.0 Jan 23 01:06:54.290572 ignition[993]: INFO : Stage: files Jan 23 01:06:54.293389 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:54.293389 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:06:54.293389 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:06:54.293389 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:06:54.293389 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:06:54.301935 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:06:54.301935 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:06:54.301935 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:06:54.299344 unknown[993]: wrote ssh authorized keys file for user: core Jan 23 01:06:54.305971 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:06:54.305971 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 01:06:54.463580 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:06:54.696383 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 01:06:54.696383 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:06:54.699599 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:54.732637 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:54.732637 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:54.732637 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 01:06:55.241135 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 01:06:55.926422 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 01:06:55.926422 ignition[993]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 01:06:55.930022 ignition[993]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:06:55.931325 ignition[993]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:55.945279 ignition[993]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:06:55.945279 ignition[993]: INFO : files: files passed Jan 23 01:06:55.945279 ignition[993]: INFO : Ignition finished successfully Jan 23 01:06:55.934945 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:06:55.940778 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:06:55.946282 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:06:55.959101 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 01:06:55.959211 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:06:55.967779 initrd-setup-root-after-ignition[1023]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:55.967779 initrd-setup-root-after-ignition[1023]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:55.971152 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:06:55.973172 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:55.974527 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:06:55.977051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:06:56.026996 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:06:56.027170 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:06:56.029617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:06:56.030592 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:06:56.032295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:06:56.034021 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:06:56.051091 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:56.054493 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:06:56.078534 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:56.080284 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:56.081221 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:06:56.083519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:06:56.083701 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:06:56.085750 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:06:56.086949 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:06:56.088676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 23 01:06:56.090155 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:06:56.091837 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:06:56.093530 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:06:56.095266 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:06:56.097094 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:06:56.098916 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:06:56.100679 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:06:56.102441 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:06:56.103894 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:06:56.104001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:06:56.105774 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:56.107048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:56.108545 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:06:56.109049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:56.110501 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:06:56.110659 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:06:56.113126 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:06:56.113288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:06:56.114592 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:06:56.114728 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:06:56.118140 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:06:56.123671 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:06:56.126093 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:06:56.126768 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:56.127697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:06:56.127859 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:06:56.132238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:06:56.133374 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:06:56.163885 ignition[1047]: INFO : Ignition 2.22.0 Jan 23 01:06:56.163990 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:06:56.191488 ignition[1047]: INFO : Stage: umount Jan 23 01:06:56.191488 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:06:56.191488 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 01:06:56.191488 ignition[1047]: INFO : umount: umount passed Jan 23 01:06:56.191488 ignition[1047]: INFO : Ignition finished successfully Jan 23 01:06:56.169663 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 01:06:56.169809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:06:56.191110 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:06:56.191252 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 23 01:06:56.192687 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:06:56.193030 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:06:56.195227 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:06:56.195302 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:06:56.197060 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 01:06:56.197128 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 01:06:56.198546 systemd[1]: Stopped target network.target - Network. Jan 23 01:06:56.200251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:06:56.200327 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:06:56.202068 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:06:56.203491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:06:56.203802 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:56.205142 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:06:56.206565 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:06:56.208143 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:06:56.208192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:06:56.209748 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:06:56.209821 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:06:56.211388 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:06:56.211444 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:06:56.213152 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:06:56.213202 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:06:56.214581 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:06:56.214633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:06:56.216444 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:06:56.218147 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:06:56.221651 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:06:56.221779 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:06:56.226196 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:06:56.226716 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:06:56.227038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:56.229451 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:06:56.232780 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:06:56.232928 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:06:56.234728 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:06:56.234944 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:06:56.236252 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:06:56.236294 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 01:06:56.238547 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:06:56.240296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:06:56.240359 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:06:56.242363 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:06:56.242426 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:56.245524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:06:56.245576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:56.247707 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:56.254359 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:06:56.267038 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:06:56.267211 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:56.268476 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:06:56.268585 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:06:56.269999 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:06:56.270067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:56.271199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:06:56.271238 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:56.272929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:06:56.272981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:06:56.275172 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:06:56.275222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:06:56.276771 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:06:56.276827 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:06:56.279857 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:06:56.281034 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:06:56.281086 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:56.283610 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:06:56.283663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:56.286340 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:06:56.286389 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:56.289845 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:06:56.289894 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:56.290679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:06:56.290727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:56.298186 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:06:56.298295 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jan 23 01:06:56.299795 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:06:56.302219 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:06:56.323689 systemd[1]: Switching root. Jan 23 01:06:56.361323 systemd-journald[187]: Journal stopped Jan 23 01:06:57.622787 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 01:06:57.622816 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:06:57.622828 kernel: SELinux: policy capability open_perms=1 Jan 23 01:06:57.622838 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:06:57.622847 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:06:57.622858 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:06:57.622868 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:06:57.622877 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:06:57.622886 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:06:57.622896 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:06:57.622905 kernel: audit: type=1403 audit(1769130416.554:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:06:57.622915 systemd[1]: Successfully loaded SELinux policy in 96.066ms. Jan 23 01:06:57.622928 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.819ms. Jan 23 01:06:57.622941 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:06:57.622953 systemd[1]: Detected virtualization kvm. Jan 23 01:06:57.622965 systemd[1]: Detected architecture x86-64. Jan 23 01:06:57.622976 systemd[1]: Detected first boot. Jan 23 01:06:57.622987 systemd[1]: Initializing machine ID from random generator. Jan 23 01:06:57.622996 kernel: Guest personality initialized and is inactive Jan 23 01:06:57.623006 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:06:57.623015 kernel: Initialized host personality Jan 23 01:06:57.623025 zram_generator::config[1091]: No configuration found. Jan 23 01:06:57.623035 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:06:57.623045 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:06:57.623058 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:06:57.623068 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:06:57.623078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:06:57.623088 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:06:57.623098 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:06:57.623108 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:06:57.623118 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:06:57.623130 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:06:57.623140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:06:57.623151 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
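"Initializing machine ID from random generator" above is the first-boot path: systemd draws 128 random bits and writes them to /etc/machine-id as 32 lowercase hex digits, forcing the UUID-v4 version and variant bits. A sketch of the equivalent, assuming only that behavior:

    import secrets

    raw = bytearray(secrets.token_bytes(16))  # 128 random bits
    raw[6] = (raw[6] & 0x0F) | 0x40           # force UUID version 4
    raw[8] = (raw[8] & 0x3F) | 0x80           # force RFC 4122 variant
    print(raw.hex())                          # 32 hex digits, like /etc/machine-id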
Jan 23 01:06:57.623161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:06:57.623171 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:06:57.623182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:06:57.623192 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:06:57.623202 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:06:57.623215 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:06:57.623228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:06:57.623238 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:06:57.623249 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:06:57.623259 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:06:57.623269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:06:57.623279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:06:57.623292 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:06:57.623302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:06:57.623312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:06:57.623322 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:06:57.623332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:06:57.623343 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:06:57.623353 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:06:57.623363 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:06:57.623374 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:06:57.623386 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:06:57.623397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:06:57.623408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:06:57.623418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:06:57.623430 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:06:57.623441 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:06:57.623451 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:06:57.623461 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:06:57.623472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:57.623482 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:06:57.623492 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:06:57.623502 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
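Units like proc-xen.mount above are skipped rather than failed when a condition check comes back false. A sketch of the two condition types that recur in this log, using systemd-detect-virt (the helper systemd itself ships); the leading-"!" negation follows systemd's documented syntax:

    import os
    import subprocess

    def condition_path_exists(spec: str) -> bool:
        # ConditionPathExists= ; a leading "!" negates the test.
        if spec.startswith("!"):
            return not os.path.exists(spec[1:])
        return os.path.exists(spec)

    def condition_virtualization(expected: str) -> bool:
        # ConditionVirtualization=xen ; systemd-detect-virt prints the
        # detected hypervisor ("kvm" on this machine) or "none".
        out = subprocess.run(["systemd-detect-virt"],
                             capture_output=True, text=True)
        return out.stdout.strip() == expected

    # On the KVM instance in this log both checks are False, so the
    # corresponding units are skipped instead of started.
    print(condition_virtualization("xen"))
    print(condition_path_exists("/var/lib/machines.raw"))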
Jan 23 01:06:57.623515 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:06:57.623525 systemd[1]: Reached target machines.target - Containers. Jan 23 01:06:57.623536 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 01:06:57.623546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:57.623556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:06:57.623566 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:06:57.623577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:57.623587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:06:57.623597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:57.623609 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:06:57.623620 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:57.623631 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:06:57.623641 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:06:57.623652 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:06:57.623661 kernel: ACPI: bus type drm_connector registered Jan 23 01:06:57.623672 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:06:57.624171 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:06:57.624190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:57.624200 kernel: loop: module loaded Jan 23 01:06:57.624210 kernel: fuse: init (API version 7.41) Jan 23 01:06:57.624220 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:06:57.624231 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:06:57.624241 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:06:57.624251 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:06:57.624262 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:06:57.624297 systemd-journald[1180]: Collecting audit messages is disabled. Jan 23 01:06:57.624318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:06:57.624331 systemd-journald[1180]: Journal started Jan 23 01:06:57.624353 systemd-journald[1180]: Runtime Journal (/run/log/journal/6b42a8cd35d84ae891feba0bbe1a46a5) is 8M, max 78.2M, 70.2M free. Jan 23 01:06:57.228595 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:06:57.255550 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 01:06:57.256273 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:06:57.632023 systemd[1]: verity-setup.service: Deactivated successfully. 
Jan 23 01:06:57.633881 systemd[1]: Stopped verity-setup.service. Jan 23 01:06:57.639795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:57.645830 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:06:57.647370 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:06:57.648255 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:06:57.649153 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:06:57.650069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:06:57.651061 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:06:57.652082 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:06:57.653207 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:06:57.654338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:06:57.655476 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:06:57.655746 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:06:57.657010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:57.657262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:57.658511 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:57.658823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:57.659969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:06:57.660233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:57.661374 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:06:57.661630 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:06:57.663076 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:57.663361 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:57.664922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:06:57.666315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:06:57.667680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:06:57.669342 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:06:57.683210 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:06:57.685871 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:06:57.690834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:06:57.691594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:06:57.691620 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:06:57.696001 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:06:57.705076 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:06:57.708315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
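The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop instances finished above are one template unit stamped out per module name. The net effect is roughly the loop below; this is a sketch, and the flags mirror the upstream modprobe@.service invocation rather than anything shown in this log:

    import subprocess

    # One modprobe call per template instance; the real unit runs
    # "modprobe -abq %I" with additional sandboxing applied.
    for module in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        subprocess.run(["modprobe", "-abq", module], check=False)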
Jan 23 01:06:57.712874 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:06:57.722879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 01:06:57.724857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:57.726877 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:06:57.729872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:57.732850 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:06:57.737932 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:06:57.746899 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:06:57.752108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:06:57.754975 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:06:57.776407 systemd-journald[1180]: Time spent on flushing to /var/log/journal/6b42a8cd35d84ae891feba0bbe1a46a5 is 31.758ms for 1006 entries. Jan 23 01:06:57.776407 systemd-journald[1180]: System Journal (/var/log/journal/6b42a8cd35d84ae891feba0bbe1a46a5) is 8M, max 195.6M, 187.6M free. Jan 23 01:06:57.837850 systemd-journald[1180]: Received client request to flush runtime journal. Jan 23 01:06:57.837937 kernel: loop0: detected capacity change from 0 to 8 Jan 23 01:06:57.787323 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:06:57.790380 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:06:57.848603 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:06:57.795433 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:06:57.797677 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:06:57.833564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:06:57.846519 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 23 01:06:57.846536 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 23 01:06:57.849191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:06:57.854588 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:06:57.861474 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:06:57.865245 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:06:57.876512 kernel: loop1: detected capacity change from 0 to 110984 Jan 23 01:06:57.914785 kernel: loop2: detected capacity change from 0 to 224512 Jan 23 01:06:57.930627 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:06:57.934057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:06:57.958864 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 01:06:57.982066 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 23 01:06:57.982404 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. 
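The journald self-report above, 31.758 ms to flush 1006 entries to /var/log/journal, works out to about 31.6 µs per entry:

    ms_total, entries = 31.758, 1006
    print(f"{ms_total / entries * 1000:.1f} us per entry")  # -> 31.6 us per entry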
Jan 23 01:06:57.987339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:06:58.005803 kernel: loop4: detected capacity change from 0 to 8 Jan 23 01:06:58.011102 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 01:06:58.028977 kernel: loop6: detected capacity change from 0 to 224512 Jan 23 01:06:58.056799 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 01:06:58.072206 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Jan 23 01:06:58.072892 (sd-merge)[1247]: Merged extensions into '/usr'. Jan 23 01:06:58.078916 systemd[1]: Reload requested from client PID 1217 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:06:58.079092 systemd[1]: Reloading... Jan 23 01:06:58.180794 zram_generator::config[1273]: No configuration found. Jan 23 01:06:58.272791 ldconfig[1212]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:06:58.378825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:06:58.379506 systemd[1]: Reloading finished in 299 ms. Jan 23 01:06:58.412431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:06:58.414159 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:06:58.415536 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:06:58.429123 systemd[1]: Starting ensure-sysext.service... Jan 23 01:06:58.431873 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:06:58.435334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:06:58.457193 systemd[1]: Reload requested from client PID 1319 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:06:58.457212 systemd[1]: Reloading... Jan 23 01:06:58.481499 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Jan 23 01:06:58.483447 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:06:58.483894 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:06:58.484315 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:06:58.484690 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:06:58.486869 systemd-tmpfiles[1320]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:06:58.487207 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jan 23 01:06:58.487648 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Jan 23 01:06:58.498459 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:58.498651 systemd-tmpfiles[1320]: Skipping /boot Jan 23 01:06:58.521413 systemd-tmpfiles[1320]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:06:58.521426 systemd-tmpfiles[1320]: Skipping /boot Jan 23 01:06:58.560834 zram_generator::config[1352]: No configuration found. Jan 23 01:06:58.806778 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:06:58.809472 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
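The (sd-merge) lines above show systemd-sysext discovering the four extension images, including the kubernetes.raw symlink that Ignition wrote, and overmounting the merged result onto /usr, after which systemd reloads. A discovery-only sketch; the search path is abbreviated here, and the real tool also validates each image's extension-release metadata before merging:

    from pathlib import Path

    # Abbreviated search path; systemd-sysext also looks under
    # /usr/lib/extensions, among others.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for img in sorted(base.glob("*.raw")):
            # kubernetes.raw is a symlink in this log, so resolve()
            # yields the image under /opt/extensions/kubernetes/.
            print(img, "->", img.resolve())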
Jan 23 01:06:58.809902 systemd[1]: Reloading finished in 352 ms. Jan 23 01:06:58.823710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:06:58.837981 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:06:58.856663 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:06:58.860803 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:06:58.863902 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:06:58.871152 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:06:58.875013 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:06:58.880076 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:06:58.885943 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:06:58.888573 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:06:58.909133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.909299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:58.915823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:06:58.921812 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:06:58.924847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:06:58.926818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:58.926917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:58.926996 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.938039 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:06:58.948537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.949991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:06:58.950148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:58.950224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:58.950295 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.957385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.957617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
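The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d entries claimed the same path; systemd-tmpfiles keeps the first and warns about the rest. A simplified detector in the same spirit (it ignores specifier expansion and the /etc-over-/run-over-/usr precedence rules the real parser applies):

    import shlex
    import sys

    # Usage: python3 dup_check.py /usr/lib/tmpfiles.d/*.conf
    seen = {}
    for fname in sys.argv[1:]:
        with open(fname) as f:
            for lineno, line in enumerate(f, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = shlex.split(line)
                if len(fields) < 2:
                    continue
                path = fields[1]
                if path in seen:
                    print(f"{fname}:{lineno}: duplicate line for path "
                          f"{path!r} (first seen at {seen[path]})")
                else:
                    seen[path] = f"{fname}:{lineno}"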
Jan 23 01:06:58.966003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:06:58.968343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:06:58.968436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:06:58.968548 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:06:58.980821 systemd[1]: Finished ensure-sysext.service. Jan 23 01:06:58.987241 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:06:58.989884 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:06:59.015654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:06:59.018137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:06:59.043820 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:06:59.045212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:06:59.046549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:06:59.049164 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:06:59.049396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:06:59.058621 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:06:59.061730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:06:59.064306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:06:59.066987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:06:59.067073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:06:59.070088 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:06:59.072857 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:06:59.073147 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:06:59.075904 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:06:59.079003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:06:59.094817 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:06:59.111616 augenrules[1487]: No rules Jan 23 01:06:59.116947 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:06:59.119068 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:06:59.119395 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:06:59.175150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 01:06:59.185892 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 23 01:06:59.194490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:06:59.211239 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:06:59.332553 systemd-networkd[1431]: lo: Link UP Jan 23 01:06:59.334942 systemd-networkd[1431]: lo: Gained carrier Jan 23 01:06:59.336962 systemd-networkd[1431]: Enumeration completed Jan 23 01:06:59.340207 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:59.340293 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:06:59.341887 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:06:59.342619 systemd-networkd[1431]: eth0: Link UP Jan 23 01:06:59.342816 systemd-networkd[1431]: eth0: Gained carrier Jan 23 01:06:59.342830 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:06:59.369223 systemd-resolved[1432]: Positive Trust Anchors: Jan 23 01:06:59.369235 systemd-resolved[1432]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:06:59.369261 systemd-resolved[1432]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:06:59.377045 systemd-resolved[1432]: Defaulting to hostname 'linux'. Jan 23 01:06:59.439336 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:06:59.440208 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:06:59.441344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:06:59.442982 systemd[1]: Reached target network.target - Network. Jan 23 01:06:59.443667 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:06:59.444438 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:06:59.445284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:06:59.446347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:06:59.447302 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:06:59.448071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:06:59.448996 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:06:59.449045 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:06:59.449714 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:06:59.450841 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
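The positive trust anchor systemd-resolved prints above is the root zone's DS record. Its fields decode as key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), with a 256-bit digest; a small decoder, with the IANA registry tables abbreviated to the two values that occur here:

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    name, _cls, _rtype, key_tag, alg, digest_type, digest = anchor.split()
    ALGS = {"8": "RSASHA256"}      # abbreviated IANA DNSSEC registries
    DIGESTS = {"2": "SHA-256"}
    print(f"zone={name!r} key_tag={key_tag} algorithm={ALGS[alg]} "
          f"digest_type={DIGESTS[digest_type]} digest_bits={len(digest) * 4}")
    # -> zone='.' key_tag=20326 algorithm=RSASHA256 digest_type=SHA-256 digest_bits=256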
Jan 23 01:06:59.451653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:06:59.452417 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:06:59.454445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:06:59.456648 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:06:59.459354 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:06:59.460256 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:06:59.461019 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:06:59.463680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:06:59.465127 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:06:59.467132 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:06:59.468851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:06:59.471334 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:06:59.472910 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:06:59.473742 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:06:59.474485 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:59.474517 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:06:59.480832 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:06:59.483858 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:06:59.487148 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:06:59.490047 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:06:59.493860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:06:59.498082 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:06:59.499815 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:06:59.508869 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:06:59.512866 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:06:59.514749 jq[1518]: false Jan 23 01:06:59.516170 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:06:59.549120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:06:59.550903 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 23 01:06:59.551138 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Jan 23 01:06:59.554696 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 01:06:59.556485 extend-filesystems[1519]: Found /dev/sda6 Jan 23 01:06:59.554822 oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 23 01:06:59.557583 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Jan 23 01:06:59.557583 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:59.557583 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 23 01:06:59.557583 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 23 01:06:59.557583 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:59.554836 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:06:59.554874 oslogin_cache_refresh[1520]: Refreshing group entry cache Jan 23 01:06:59.555307 oslogin_cache_refresh[1520]: Failure getting groups, quitting Jan 23 01:06:59.555316 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:06:59.562941 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:06:59.564529 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:06:59.565898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:06:59.567718 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:06:59.572282 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:06:59.579117 extend-filesystems[1519]: Found /dev/sda9 Jan 23 01:06:59.582113 extend-filesystems[1519]: Checking size of /dev/sda9 Jan 23 01:06:59.583992 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:06:59.588586 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:06:59.590709 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:06:59.591073 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:06:59.591388 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:06:59.591604 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:06:59.605178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:06:59.605498 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:06:59.607381 jq[1534]: true Jan 23 01:06:59.612157 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:06:59.612474 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 01:06:59.628592 update_engine[1533]: I20260123 01:06:59.628208 1533 main.cc:92] Flatcar Update Engine starting Jan 23 01:06:59.632941 coreos-metadata[1515]: Jan 23 01:06:59.630 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:06:59.645992 extend-filesystems[1519]: Resized partition /dev/sda9 Jan 23 01:06:59.656730 jq[1552]: true Jan 23 01:06:59.656997 tar[1545]: linux-amd64/LICENSE Jan 23 01:06:59.656997 tar[1545]: linux-amd64/helm Jan 23 01:06:59.658098 (ntainerd)[1562]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:06:59.663631 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:06:59.677811 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 01:06:59.694286 dbus-daemon[1516]: [system] SELinux support is enabled Jan 23 01:06:59.694464 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:06:59.700208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:06:59.700243 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:06:59.701404 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:06:59.701423 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:06:59.716666 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:06:59.719445 update_engine[1533]: I20260123 01:06:59.719376 1533 update_check_scheduler.cc:74] Next update check in 5m4s Jan 23 01:06:59.735034 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:06:59.769346 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:06:59.769383 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:06:59.771840 systemd-logind[1531]: New seat seat0. Jan 23 01:06:59.778744 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:06:59.812998 bash[1585]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:06:59.817624 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:06:59.826198 systemd[1]: Starting sshkeys.service... Jan 23 01:06:59.906816 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:06:59.910011 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
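
extend-filesystems is about to grow the root filesystem online: per the kernel line above, /dev/sda9 goes from 553472 to 20360187 4 KiB blocks, i.e. from roughly 2.1 GiB to about 77.7 GiB, while / stays mounted (ext4 supports online grow). The equivalent manual steps would look roughly like this (sketch; device name taken from the log):

    # Grow the partition first (e.g. with growpart or cfdisk), then:
    sudo resize2fs /dev/sda9    # online grow to fill the enlarged partition
    df -h /                     # verify the new size
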
Jan 23 01:06:59.957784 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 01:06:59.957835 containerd[1562]: time="2026-01-23T01:06:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:06:59.969167 containerd[1562]: time="2026-01-23T01:06:59.968967413Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:06:59.969793 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 01:06:59.969793 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 01:06:59.969793 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 01:06:59.984246 extend-filesystems[1519]: Resized filesystem in /dev/sda9 Jan 23 01:06:59.978517 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:06:59.978789 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:07:00.010423 containerd[1562]: time="2026-01-23T01:07:00.010384275Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.31µs" Jan 23 01:07:00.010423 containerd[1562]: time="2026-01-23T01:07:00.010421146Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:07:00.010481 containerd[1562]: time="2026-01-23T01:07:00.010442926Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:07:00.010644 containerd[1562]: time="2026-01-23T01:07:00.010622806Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:07:00.010663 containerd[1562]: time="2026-01-23T01:07:00.010651866Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:07:00.010696 containerd[1562]: time="2026-01-23T01:07:00.010687316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:07:00.010798 containerd[1562]: time="2026-01-23T01:07:00.010773776Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:07:00.010822 containerd[1562]: time="2026-01-23T01:07:00.010796966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011074 containerd[1562]: time="2026-01-23T01:07:00.011051477Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011095 containerd[1562]: time="2026-01-23T01:07:00.011072977Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011095 containerd[1562]: time="2026-01-23T01:07:00.011088037Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011133 containerd[1562]: time="2026-01-23T01:07:00.011099277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011213 containerd[1562]: time="2026-01-23T01:07:00.011194727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:07:00.011445 containerd[1562]: time="2026-01-23T01:07:00.011425408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:07:00.012767 containerd[1562]: time="2026-01-23T01:07:00.011467848Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:07:00.012767 containerd[1562]: time="2026-01-23T01:07:00.011480568Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:07:00.012767 containerd[1562]: time="2026-01-23T01:07:00.011502718Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:07:00.016770 containerd[1562]: time="2026-01-23T01:07:00.016001357Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:07:00.016770 containerd[1562]: time="2026-01-23T01:07:00.016104247Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:07:00.030541 coreos-metadata[1593]: Jan 23 01:07:00.030 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033530672Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033569762Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033583452Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033603002Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033613562Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033622922Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033633672Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033643962Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033653892Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033662932Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033671132Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:07:00.035164 containerd[1562]: 
time="2026-01-23T01:07:00.033680972Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033833152Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:07:00.035164 containerd[1562]: time="2026-01-23T01:07:00.033852352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033864752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033875922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033890662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033899952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033909972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033919292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033928893Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033938563Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033947583Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033987463Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.033998603Z" level=info msg="Start snapshots syncer" Jan 23 01:07:00.035443 containerd[1562]: time="2026-01-23T01:07:00.034014873Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:07:00.035637 containerd[1562]: time="2026-01-23T01:07:00.034214303Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:07:00.035637 containerd[1562]: time="2026-01-23T01:07:00.034250243Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034287613Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034388073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034416683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034426864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034440284Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034451384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034462034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034475984Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034494304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: 
time="2026-01-23T01:07:00.034504044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034513074Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034540084Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034552634Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:07:00.035741 containerd[1562]: time="2026-01-23T01:07:00.034560684Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034569504Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034576544Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034584654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034599504Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034614084Z" level=info msg="runtime interface created" Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034619404Z" level=info msg="created NRI interface" Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034626434Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034636174Z" level=info msg="Connect containerd service" Jan 23 01:07:00.035976 containerd[1562]: time="2026-01-23T01:07:00.034674224Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:07:00.041306 containerd[1562]: time="2026-01-23T01:07:00.041150197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:07:00.053830 systemd-networkd[1431]: eth0: DHCPv4 address 172.239.48.230/24, gateway 172.239.48.1 acquired from 23.194.118.56 Jan 23 01:07:00.054855 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1431 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 01:07:00.058328 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. Jan 23 01:07:00.060126 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 01:07:00.092507 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:07:00.164922 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:07:00.174948 containerd[1562]: time="2026-01-23T01:07:00.174582654Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:07:00.174948 containerd[1562]: time="2026-01-23T01:07:00.174649974Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:07:00.174948 containerd[1562]: time="2026-01-23T01:07:00.174676604Z" level=info msg="Start subscribing containerd event" Jan 23 01:07:00.174948 containerd[1562]: time="2026-01-23T01:07:00.174698584Z" level=info msg="Start recovering state" Jan 23 01:07:00.179359 containerd[1562]: time="2026-01-23T01:07:00.178866152Z" level=info msg="Start event monitor" Jan 23 01:07:00.179359 containerd[1562]: time="2026-01-23T01:07:00.179357003Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:07:00.179416 containerd[1562]: time="2026-01-23T01:07:00.179365343Z" level=info msg="Start streaming server" Jan 23 01:07:00.179416 containerd[1562]: time="2026-01-23T01:07:00.179380253Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:07:00.179416 containerd[1562]: time="2026-01-23T01:07:00.179387043Z" level=info msg="runtime interface starting up..." Jan 23 01:07:00.179416 containerd[1562]: time="2026-01-23T01:07:00.179392753Z" level=info msg="starting plugins..." Jan 23 01:07:00.179778 containerd[1562]: time="2026-01-23T01:07:00.179736484Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:07:00.180978 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:07:00.595240 systemd-resolved[1432]: Clock change detected. Flushing caches. Jan 23 01:07:00.596008 systemd-timesyncd[1450]: Contacted time server 138.89.14.60:123 (0.flatcar.pool.ntp.org). Jan 23 01:07:00.596061 systemd-timesyncd[1450]: Initial clock synchronization to Fri 2026-01-23 01:07:00.595206 UTC. Jan 23 01:07:00.597563 containerd[1562]: time="2026-01-23T01:07:00.597537645Z" level=info msg="containerd successfully booted in 0.228395s" Jan 23 01:07:00.601941 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:07:00.607410 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:07:00.611668 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 01:07:00.612161 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 01:07:00.612918 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 01:07:00.618472 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 01:07:00.633289 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:07:00.633566 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:07:00.639216 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:07:00.671314 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:07:00.678159 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:07:00.685407 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:07:00.687359 systemd[1]: Reached target getty.target - Login Prompts. 
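
At this point first-boot host setup is essentially done: sshd-keygen generated fresh RSA, ECDSA and ED25519 host keys, issuegen wrote /run/issue, and the tty1/ttyS0 gettys plus getty.target mean both console and SSH logins are available. The new host-key fingerprints could be read back with, for example:

    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
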
Jan 23 01:07:00.730309 polkitd[1629]: Started polkitd version 126 Jan 23 01:07:00.733936 polkitd[1629]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 01:07:00.734410 polkitd[1629]: Loading rules from directory /run/polkit-1/rules.d Jan 23 01:07:00.734460 polkitd[1629]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:07:00.734663 polkitd[1629]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 01:07:00.734690 polkitd[1629]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 01:07:00.734724 polkitd[1629]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 01:07:00.735650 polkitd[1629]: Finished loading, compiling and executing 2 rules Jan 23 01:07:00.735882 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 01:07:00.736470 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 01:07:00.737603 polkitd[1629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 01:07:00.750817 systemd-resolved[1432]: System hostname changed to '172-239-48-230'. Jan 23 01:07:00.750926 systemd-hostnamed[1609]: Hostname set to <172-239-48-230> (transient) Jan 23 01:07:00.760378 tar[1545]: linux-amd64/README.md Jan 23 01:07:00.773357 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:07:01.011148 systemd-networkd[1431]: eth0: Gained IPv6LL Jan 23 01:07:01.014261 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:07:01.015772 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:07:01.020029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:01.024198 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:07:01.052342 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:07:01.057190 coreos-metadata[1515]: Jan 23 01:07:01.057 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:07:01.149306 coreos-metadata[1515]: Jan 23 01:07:01.149 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 01:07:01.338045 coreos-metadata[1515]: Jan 23 01:07:01.337 INFO Fetch successful Jan 23 01:07:01.338045 coreos-metadata[1515]: Jan 23 01:07:01.337 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 01:07:01.453558 coreos-metadata[1593]: Jan 23 01:07:01.453 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 01:07:01.545702 coreos-metadata[1593]: Jan 23 01:07:01.545 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 01:07:01.597992 coreos-metadata[1515]: Jan 23 01:07:01.597 INFO Fetch successful Jan 23 01:07:01.680312 coreos-metadata[1593]: Jan 23 01:07:01.680 INFO Fetch successful Jan 23 01:07:01.709750 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:07:01.711434 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:07:01.714770 systemd[1]: Finished sshkeys.service. Jan 23 01:07:01.721509 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:07:01.722715 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:07:01.897330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
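
Note how the machine named itself earlier in this stretch: with no hostname supplied by configuration, systemd-hostnamed set a transient hostname that appears to be derived from the leased address (172.239.48.230 becomes 172-239-48-230), and systemd-resolved picked the change up. On a running system this shows as:

    hostnamectl status    # expect "Transient hostname: 172-239-48-230"
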
Jan 23 01:07:01.898567 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:07:01.905315 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:01.905496 systemd[1]: Startup finished in 3.079s (kernel) + 8.862s (initrd) + 5.031s (userspace) = 16.973s. Jan 23 01:07:02.389198 kubelet[1692]: E0123 01:07:02.389071 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:02.392255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:02.392447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:02.392812 systemd[1]: kubelet.service: Consumed 843ms CPU time, 265.9M memory peak. Jan 23 01:07:03.159781 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:07:03.160921 systemd[1]: Started sshd@0-172.239.48.230:22-68.220.241.50:60124.service - OpenSSH per-connection server daemon (68.220.241.50:60124). Jan 23 01:07:03.369419 sshd[1704]: Accepted publickey for core from 68.220.241.50 port 60124 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:03.371075 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:03.377101 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:07:03.378442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:07:03.389025 systemd-logind[1531]: New session 1 of user core. Jan 23 01:07:03.395626 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:07:03.398533 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:07:03.414350 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:07:03.416881 systemd-logind[1531]: New session c1 of user core. Jan 23 01:07:03.541902 systemd[1709]: Queued start job for default target default.target. Jan 23 01:07:03.552331 systemd[1709]: Created slice app.slice - User Application Slice. Jan 23 01:07:03.552359 systemd[1709]: Reached target paths.target - Paths. Jan 23 01:07:03.552402 systemd[1709]: Reached target timers.target - Timers. Jan 23 01:07:03.554028 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:07:03.564821 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:07:03.565172 systemd[1709]: Reached target sockets.target - Sockets. Jan 23 01:07:03.565291 systemd[1709]: Reached target basic.target - Basic System. Jan 23 01:07:03.565462 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:07:03.565558 systemd[1709]: Reached target default.target - Main User Target. Jan 23 01:07:03.566755 systemd[1709]: Startup finished in 144ms. Jan 23 01:07:03.573132 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:07:03.734820 systemd[1]: Started sshd@1-172.239.48.230:22-68.220.241.50:60140.service - OpenSSH per-connection server daemon (68.220.241.50:60140). 
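
The kubelet exit above is the classic pre-kubeadm state: kubelet.service is enabled and starts, but /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd keeps restarting it until `kubeadm init` or `kubeadm join` writes that file. Purely as an illustration of what eventually lands there, a minimal KubeletConfiguration would look like this (sketch; in practice kubeadm generates the real, much fuller file):

    # Written by kubeadm in practice; shown here only as a sketch
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches the CRI runtime setting seen later in this log
    EOF
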
Jan 23 01:07:03.916867 sshd[1720]: Accepted publickey for core from 68.220.241.50 port 60140 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:03.918333 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:03.923893 systemd-logind[1531]: New session 2 of user core. Jan 23 01:07:03.930137 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:07:04.063384 sshd[1723]: Connection closed by 68.220.241.50 port 60140 Jan 23 01:07:04.064934 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:04.068334 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:07:04.069074 systemd[1]: sshd@1-172.239.48.230:22-68.220.241.50:60140.service: Deactivated successfully. Jan 23 01:07:04.070850 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:07:04.072279 systemd-logind[1531]: Removed session 2. Jan 23 01:07:04.096157 systemd[1]: Started sshd@2-172.239.48.230:22-68.220.241.50:60156.service - OpenSSH per-connection server daemon (68.220.241.50:60156). Jan 23 01:07:04.268016 sshd[1729]: Accepted publickey for core from 68.220.241.50 port 60156 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:04.269142 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:04.274337 systemd-logind[1531]: New session 3 of user core. Jan 23 01:07:04.285126 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:07:04.393555 sshd[1732]: Connection closed by 68.220.241.50 port 60156 Jan 23 01:07:04.394360 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:04.398808 systemd[1]: sshd@2-172.239.48.230:22-68.220.241.50:60156.service: Deactivated successfully. Jan 23 01:07:04.400916 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:07:04.401632 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:07:04.402818 systemd-logind[1531]: Removed session 3. Jan 23 01:07:04.429590 systemd[1]: Started sshd@3-172.239.48.230:22-68.220.241.50:60162.service - OpenSSH per-connection server daemon (68.220.241.50:60162). Jan 23 01:07:04.613026 sshd[1738]: Accepted publickey for core from 68.220.241.50 port 60162 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:04.614787 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:04.621190 systemd-logind[1531]: New session 4 of user core. Jan 23 01:07:04.631124 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:07:04.768585 sshd[1741]: Connection closed by 68.220.241.50 port 60162 Jan 23 01:07:04.770137 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:04.774636 systemd[1]: sshd@3-172.239.48.230:22-68.220.241.50:60162.service: Deactivated successfully. Jan 23 01:07:04.776727 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:07:04.777855 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:07:04.779221 systemd-logind[1531]: Removed session 4. Jan 23 01:07:04.803611 systemd[1]: Started sshd@4-172.239.48.230:22-68.220.241.50:60174.service - OpenSSH per-connection server daemon (68.220.241.50:60174). 
Jan 23 01:07:04.979753 sshd[1747]: Accepted publickey for core from 68.220.241.50 port 60174 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:04.981399 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:04.986819 systemd-logind[1531]: New session 5 of user core. Jan 23 01:07:04.997312 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:07:05.104343 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:07:05.104685 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:05.122031 sudo[1751]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:05.145497 sshd[1750]: Connection closed by 68.220.241.50 port 60174 Jan 23 01:07:05.147162 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:05.151042 systemd[1]: sshd@4-172.239.48.230:22-68.220.241.50:60174.service: Deactivated successfully. Jan 23 01:07:05.153032 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:07:05.154276 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:07:05.156025 systemd-logind[1531]: Removed session 5. Jan 23 01:07:05.172946 systemd[1]: Started sshd@5-172.239.48.230:22-68.220.241.50:60176.service - OpenSSH per-connection server daemon (68.220.241.50:60176). Jan 23 01:07:05.337746 sshd[1757]: Accepted publickey for core from 68.220.241.50 port 60176 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:05.339381 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:05.345195 systemd-logind[1531]: New session 6 of user core. Jan 23 01:07:05.354142 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:07:05.447579 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:07:05.447899 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:05.452654 sudo[1762]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:05.458276 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:07:05.458580 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:05.468930 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:07:05.504264 augenrules[1784]: No rules Jan 23 01:07:05.505641 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:07:05.505891 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:07:05.506691 sudo[1761]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:05.527601 sshd[1760]: Connection closed by 68.220.241.50 port 60176 Jan 23 01:07:05.527972 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:05.531657 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:07:05.532037 systemd[1]: sshd@5-172.239.48.230:22-68.220.241.50:60176.service: Deactivated successfully. Jan 23 01:07:05.533846 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:07:05.535830 systemd-logind[1531]: Removed session 6. Jan 23 01:07:05.559140 systemd[1]: Started sshd@6-172.239.48.230:22-68.220.241.50:60188.service - OpenSSH per-connection server daemon (68.220.241.50:60188). 
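
The sudo entries across these SSH sessions trace a scripted provisioning sequence: `setenforce 1`, removal of the stock audit rule files, then `systemctl restart audit-rules`, after which augenrules correctly reports "No rules" and the unit finishes. The resulting (empty) audit ruleset could be confirmed with:

    sudo auditctl -l    # expect "No rules" after the restart above
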
Jan 23 01:07:05.720812 sshd[1793]: Accepted publickey for core from 68.220.241.50 port 60188 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:07:05.722656 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:07:05.727809 systemd-logind[1531]: New session 7 of user core. Jan 23 01:07:05.737107 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:07:05.869386 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:07:05.869714 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:07:06.148767 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:07:06.156521 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:07:06.360455 dockerd[1814]: time="2026-01-23T01:07:06.360399508Z" level=info msg="Starting up" Jan 23 01:07:06.361416 dockerd[1814]: time="2026-01-23T01:07:06.361399240Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:07:06.372551 dockerd[1814]: time="2026-01-23T01:07:06.372526403Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:07:06.385252 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport509111879-merged.mount: Deactivated successfully. Jan 23 01:07:06.450664 dockerd[1814]: time="2026-01-23T01:07:06.450324438Z" level=info msg="Loading containers: start." Jan 23 01:07:06.461001 kernel: Initializing XFRM netlink socket Jan 23 01:07:06.715490 systemd-networkd[1431]: docker0: Link UP Jan 23 01:07:06.717841 dockerd[1814]: time="2026-01-23T01:07:06.717816713Z" level=info msg="Loading containers: done." Jan 23 01:07:06.731114 dockerd[1814]: time="2026-01-23T01:07:06.731080810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:07:06.731234 dockerd[1814]: time="2026-01-23T01:07:06.731137600Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:07:06.731234 dockerd[1814]: time="2026-01-23T01:07:06.731219070Z" level=info msg="Initializing buildkit" Jan 23 01:07:06.751563 dockerd[1814]: time="2026-01-23T01:07:06.751541500Z" level=info msg="Completed buildkit initialization" Jan 23 01:07:06.757493 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:07:06.757628 dockerd[1814]: time="2026-01-23T01:07:06.757097152Z" level=info msg="Daemon has completed initialization" Jan 23 01:07:06.757797 dockerd[1814]: time="2026-01-23T01:07:06.757681883Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:07:07.329821 containerd[1562]: time="2026-01-23T01:07:07.329757887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 01:07:07.383652 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2697880587-merged.mount: Deactivated successfully. Jan 23 01:07:07.938039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934049343.mount: Deactivated successfully. 
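
Docker comes up on overlay2 and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; that only costs image-build performance, not correctness. Also worth noticing: the kube-apiserver pull that starts right after is logged by containerd, not dockerd, i.e. the Kubernetes images go through the CRI path. The storage driver in use can be checked with:

    docker info --format '{{.Driver}}'    # expect: overlay2
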
Jan 23 01:07:08.973499 containerd[1562]: time="2026-01-23T01:07:08.973452083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:08.974853 containerd[1562]: time="2026-01-23T01:07:08.974812416Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070653" Jan 23 01:07:08.974960 containerd[1562]: time="2026-01-23T01:07:08.974938356Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:08.977096 containerd[1562]: time="2026-01-23T01:07:08.977063600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:08.977871 containerd[1562]: time="2026-01-23T01:07:08.977837692Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.648047005s" Jan 23 01:07:08.977905 containerd[1562]: time="2026-01-23T01:07:08.977876162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 01:07:08.978579 containerd[1562]: time="2026-01-23T01:07:08.978558423Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 01:07:10.372385 containerd[1562]: time="2026-01-23T01:07:10.372333770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.373400 containerd[1562]: time="2026-01-23T01:07:10.373150352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993360" Jan 23 01:07:10.373952 containerd[1562]: time="2026-01-23T01:07:10.373923334Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.375818 containerd[1562]: time="2026-01-23T01:07:10.375788517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:10.376880 containerd[1562]: time="2026-01-23T01:07:10.376858579Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.398273866s" Jan 23 01:07:10.376955 containerd[1562]: time="2026-01-23T01:07:10.376941410Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 
01:07:10.377951 containerd[1562]: time="2026-01-23T01:07:10.377919192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 01:07:11.402490 containerd[1562]: time="2026-01-23T01:07:11.402446160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:11.403611 containerd[1562]: time="2026-01-23T01:07:11.403587532Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405082" Jan 23 01:07:11.404616 containerd[1562]: time="2026-01-23T01:07:11.404572504Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:11.407005 containerd[1562]: time="2026-01-23T01:07:11.406985339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:11.407828 containerd[1562]: time="2026-01-23T01:07:11.407711721Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.029759129s" Jan 23 01:07:11.407828 containerd[1562]: time="2026-01-23T01:07:11.407737951Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 01:07:11.408303 containerd[1562]: time="2026-01-23T01:07:11.408276872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 01:07:12.362653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781962612.mount: Deactivated successfully. Jan 23 01:07:12.499000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:07:12.501105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:12.751092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:12.760310 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:07:12.805587 kubelet[2106]: E0123 01:07:12.805439 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:07:12.811507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:07:12.811844 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:07:12.812398 systemd[1]: kubelet.service: Consumed 197ms CPU time, 110.6M memory peak. 
Jan 23 01:07:12.834939 containerd[1562]: time="2026-01-23T01:07:12.834389313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:12.834939 containerd[1562]: time="2026-01-23T01:07:12.834918394Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161905" Jan 23 01:07:12.835423 containerd[1562]: time="2026-01-23T01:07:12.835402445Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:12.836658 containerd[1562]: time="2026-01-23T01:07:12.836638898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:12.837211 containerd[1562]: time="2026-01-23T01:07:12.837184149Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.428882987s" Jan 23 01:07:12.837245 containerd[1562]: time="2026-01-23T01:07:12.837214159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 01:07:12.837678 containerd[1562]: time="2026-01-23T01:07:12.837653770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 01:07:13.354577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846222176.mount: Deactivated successfully. 
Jan 23 01:07:14.007063 containerd[1562]: time="2026-01-23T01:07:14.006916828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:14.007774 containerd[1562]: time="2026-01-23T01:07:14.007754050Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565247" Jan 23 01:07:14.008086 containerd[1562]: time="2026-01-23T01:07:14.008033290Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:14.010002 containerd[1562]: time="2026-01-23T01:07:14.009941714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:14.010791 containerd[1562]: time="2026-01-23T01:07:14.010764436Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.173085556s" Jan 23 01:07:14.010838 containerd[1562]: time="2026-01-23T01:07:14.010794256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 01:07:14.011192 containerd[1562]: time="2026-01-23T01:07:14.011171156Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:07:14.451343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827841864.mount: Deactivated successfully. 
Jan 23 01:07:14.455083 containerd[1562]: time="2026-01-23T01:07:14.455044734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:07:14.455625 containerd[1562]: time="2026-01-23T01:07:14.455603655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 01:07:14.456615 containerd[1562]: time="2026-01-23T01:07:14.456576227Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:07:14.457938 containerd[1562]: time="2026-01-23T01:07:14.457904590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:07:14.458872 containerd[1562]: time="2026-01-23T01:07:14.458479611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 447.282195ms" Jan 23 01:07:14.458872 containerd[1562]: time="2026-01-23T01:07:14.458504441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:07:14.459070 containerd[1562]: time="2026-01-23T01:07:14.459026602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 01:07:14.940394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882842866.mount: Deactivated successfully. 
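
Taken together, the pulls in this run (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.11, coredns v1.11.3, pause 3.10, and etcd 3.5.16-0 next) are the control-plane image set kubeadm fetches ahead of init. The same set can be listed or pre-pulled explicitly:

    kubeadm config images list --kubernetes-version v1.32.11
    kubeadm config images pull --kubernetes-version v1.32.11
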
Jan 23 01:07:16.593305 containerd[1562]: time="2026-01-23T01:07:16.592178127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:16.593305 containerd[1562]: time="2026-01-23T01:07:16.593276989Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682062" Jan 23 01:07:16.593921 containerd[1562]: time="2026-01-23T01:07:16.593886691Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:16.596004 containerd[1562]: time="2026-01-23T01:07:16.595877485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:16.597655 containerd[1562]: time="2026-01-23T01:07:16.597099257Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.138049585s" Jan 23 01:07:16.597655 containerd[1562]: time="2026-01-23T01:07:16.597123457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 01:07:18.404687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:18.405445 systemd[1]: kubelet.service: Consumed 197ms CPU time, 110.6M memory peak. Jan 23 01:07:18.407818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:18.434420 systemd[1]: Reload requested from client PID 2251 ('systemctl') (unit session-7.scope)... Jan 23 01:07:18.434436 systemd[1]: Reloading... Jan 23 01:07:18.544009 zram_generator::config[2292]: No configuration found. Jan 23 01:07:18.772759 systemd[1]: Reloading finished in 337 ms. Jan 23 01:07:18.837400 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:07:18.837514 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:07:18.837878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:18.837933 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Jan 23 01:07:18.839775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:19.010346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:19.017511 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:07:19.051891 kubelet[2350]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:07:19.051891 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:07:19.051891 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:07:19.051891 kubelet[2350]: I0123 01:07:19.051701 2350 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:07:19.427623 kubelet[2350]: I0123 01:07:19.427206 2350 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:07:19.427623 kubelet[2350]: I0123 01:07:19.427232 2350 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:07:19.428043 kubelet[2350]: I0123 01:07:19.428028 2350 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:07:19.457658 kubelet[2350]: E0123 01:07:19.457631 2350 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.239.48.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.48.230:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:07:19.458475 kubelet[2350]: I0123 01:07:19.458451 2350 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:07:19.466171 kubelet[2350]: I0123 01:07:19.466154 2350 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:07:19.470645 kubelet[2350]: I0123 01:07:19.470613 2350 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:07:19.471987 kubelet[2350]: I0123 01:07:19.471931 2350 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:07:19.472119 kubelet[2350]: I0123 01:07:19.471966 2350 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-48-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:07:19.472410 kubelet[2350]: I0123 
01:07:19.472121 2350 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:07:19.472410 kubelet[2350]: I0123 01:07:19.472131 2350 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:07:19.472410 kubelet[2350]: I0123 01:07:19.472237 2350 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:19.476837 kubelet[2350]: I0123 01:07:19.476721 2350 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:07:19.476837 kubelet[2350]: I0123 01:07:19.476763 2350 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:07:19.476837 kubelet[2350]: I0123 01:07:19.476783 2350 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:07:19.476837 kubelet[2350]: I0123 01:07:19.476793 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:07:19.481325 kubelet[2350]: W0123 01:07:19.481298 2350 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.239.48.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-48-230&limit=500&resourceVersion=0": dial tcp 172.239.48.230:6443: connect: connection refused Jan 23 01:07:19.481584 kubelet[2350]: E0123 01:07:19.481545 2350 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.239.48.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-48-230&limit=500&resourceVersion=0\": dial tcp 172.239.48.230:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:07:19.481667 kubelet[2350]: I0123 01:07:19.481635 2350 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:07:19.482387 kubelet[2350]: I0123 01:07:19.481930 2350 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:07:19.482722 kubelet[2350]: W0123 01:07:19.482704 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 01:07:19.484707 kubelet[2350]: I0123 01:07:19.484685 2350 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:07:19.487023 kubelet[2350]: I0123 01:07:19.484753 2350 server.go:1287] "Started kubelet" Jan 23 01:07:19.487023 kubelet[2350]: W0123 01:07:19.486499 2350 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.239.48.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.239.48.230:6443: connect: connection refused Jan 23 01:07:19.487023 kubelet[2350]: E0123 01:07:19.486530 2350 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.239.48.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.48.230:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:07:19.487670 kubelet[2350]: I0123 01:07:19.487621 2350 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:07:19.488606 kubelet[2350]: I0123 01:07:19.488583 2350 server.go:479] "Adding debug handlers to kubelet server" Jan 23 01:07:19.491758 kubelet[2350]: I0123 01:07:19.490787 2350 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:07:19.491758 kubelet[2350]: I0123 01:07:19.491078 2350 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:07:19.492815 kubelet[2350]: I0123 01:07:19.492138 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:07:19.496045 kubelet[2350]: E0123 01:07:19.494928 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.48.230:6443/api/v1/namespaces/default/events\": dial tcp 172.239.48.230:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-48-230.188d36c2eed0fc91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-48-230,UID:172-239-48-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-48-230,},FirstTimestamp:2026-01-23 01:07:19.484701841 +0000 UTC m=+0.463879419,LastTimestamp:2026-01-23 01:07:19.484701841 +0000 UTC m=+0.463879419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-48-230,}" Jan 23 01:07:19.496609 kubelet[2350]: I0123 01:07:19.496593 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:07:19.498762 kubelet[2350]: I0123 01:07:19.498743 2350 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:07:19.498904 kubelet[2350]: E0123 01:07:19.498884 2350 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-48-230\" not found" Jan 23 01:07:19.499435 kubelet[2350]: E0123 01:07:19.499399 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.48.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-48-230?timeout=10s\": dial tcp 172.239.48.230:6443: connect: connection refused" interval="200ms" Jan 23 01:07:19.501619 kubelet[2350]: I0123 01:07:19.501595 2350 desired_state_of_world_populator.go:150] 
"Desired state populator starts to run" Jan 23 01:07:19.501667 kubelet[2350]: I0123 01:07:19.501645 2350 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:07:19.501725 kubelet[2350]: I0123 01:07:19.501711 2350 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:07:19.501837 kubelet[2350]: I0123 01:07:19.501821 2350 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:07:19.503284 kubelet[2350]: W0123 01:07:19.503160 2350 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.239.48.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.239.48.230:6443: connect: connection refused Jan 23 01:07:19.503532 kubelet[2350]: E0123 01:07:19.503515 2350 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.239.48.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.48.230:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:07:19.503760 kubelet[2350]: I0123 01:07:19.503748 2350 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:07:19.510991 kubelet[2350]: I0123 01:07:19.510008 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:07:19.512803 kubelet[2350]: I0123 01:07:19.511423 2350 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:07:19.512803 kubelet[2350]: I0123 01:07:19.511440 2350 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:07:19.512803 kubelet[2350]: I0123 01:07:19.511455 2350 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:07:19.512803 kubelet[2350]: I0123 01:07:19.511462 2350 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:07:19.512803 kubelet[2350]: E0123 01:07:19.511504 2350 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:07:19.520948 kubelet[2350]: W0123 01:07:19.520915 2350 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.239.48.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.239.48.230:6443: connect: connection refused Jan 23 01:07:19.521021 kubelet[2350]: E0123 01:07:19.520954 2350 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.239.48.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.48.230:6443: connect: connection refused" logger="UnhandledError" Jan 23 01:07:19.533767 kubelet[2350]: E0123 01:07:19.533749 2350 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:07:19.541207 kubelet[2350]: I0123 01:07:19.541190 2350 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:07:19.541207 kubelet[2350]: I0123 01:07:19.541204 2350 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:07:19.541293 kubelet[2350]: I0123 01:07:19.541220 2350 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:19.542632 kubelet[2350]: I0123 01:07:19.542617 2350 policy_none.go:49] "None policy: Start" Jan 23 01:07:19.542632 kubelet[2350]: I0123 01:07:19.542634 2350 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:07:19.542632 kubelet[2350]: I0123 01:07:19.542644 2350 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:07:19.548350 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:07:19.559827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:07:19.562937 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:07:19.575992 kubelet[2350]: I0123 01:07:19.575421 2350 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:07:19.575992 kubelet[2350]: I0123 01:07:19.575631 2350 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:07:19.575992 kubelet[2350]: I0123 01:07:19.575643 2350 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:07:19.576159 kubelet[2350]: I0123 01:07:19.576147 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:07:19.577476 kubelet[2350]: E0123 01:07:19.577462 2350 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:07:19.577580 kubelet[2350]: E0123 01:07:19.577568 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-48-230\" not found" Jan 23 01:07:19.622126 systemd[1]: Created slice kubepods-burstable-podbe61ea118d9832dea6cf03e97ee742c4.slice - libcontainer container kubepods-burstable-podbe61ea118d9832dea6cf03e97ee742c4.slice. Jan 23 01:07:19.635685 kubelet[2350]: E0123 01:07:19.635667 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:19.638678 systemd[1]: Created slice kubepods-burstable-pod942c076c2453511f1314e58ef66bbc79.slice - libcontainer container kubepods-burstable-pod942c076c2453511f1314e58ef66bbc79.slice. 
Jan 23 01:07:19.641621 kubelet[2350]: E0123 01:07:19.641602 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:19.643609 kubelet[2350]: E0123 01:07:19.643533 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.48.230:6443/api/v1/namespaces/default/events\": dial tcp 172.239.48.230:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-48-230.188d36c2eed0fc91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-48-230,UID:172-239-48-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-48-230,},FirstTimestamp:2026-01-23 01:07:19.484701841 +0000 UTC m=+0.463879419,LastTimestamp:2026-01-23 01:07:19.484701841 +0000 UTC m=+0.463879419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-48-230,}" Jan 23 01:07:19.644668 systemd[1]: Created slice kubepods-burstable-podd088f3ee6de527987345873f838ac189.slice - libcontainer container kubepods-burstable-podd088f3ee6de527987345873f838ac189.slice. Jan 23 01:07:19.646671 kubelet[2350]: E0123 01:07:19.646649 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:19.679529 kubelet[2350]: I0123 01:07:19.678304 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-48-230" Jan 23 01:07:19.679529 kubelet[2350]: E0123 01:07:19.679251 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.48.230:6443/api/v1/nodes\": dial tcp 172.239.48.230:6443: connect: connection refused" node="172-239-48-230" Jan 23 01:07:19.699857 kubelet[2350]: E0123 01:07:19.699822 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.48.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-48-230?timeout=10s\": dial tcp 172.239.48.230:6443: connect: connection refused" interval="400ms" Jan 23 01:07:19.802481 kubelet[2350]: I0123 01:07:19.802443 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-ca-certs\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:19.802481 kubelet[2350]: I0123 01:07:19.802480 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d088f3ee6de527987345873f838ac189-kubeconfig\") pod \"kube-scheduler-172-239-48-230\" (UID: \"d088f3ee6de527987345873f838ac189\") " pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:19.802481 kubelet[2350]: I0123 01:07:19.802495 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-k8s-certs\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:19.802481 kubelet[2350]: I0123 01:07:19.802510 2350 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:19.802811 kubelet[2350]: I0123 01:07:19.802526 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-ca-certs\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:19.802811 kubelet[2350]: I0123 01:07:19.802550 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-flexvolume-dir\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:19.802811 kubelet[2350]: I0123 01:07:19.802564 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-k8s-certs\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:19.802811 kubelet[2350]: I0123 01:07:19.802577 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-kubeconfig\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:19.802811 kubelet[2350]: I0123 01:07:19.802597 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:19.881295 kubelet[2350]: I0123 01:07:19.881006 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-48-230" Jan 23 01:07:19.881295 kubelet[2350]: E0123 01:07:19.881272 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.48.230:6443/api/v1/nodes\": dial tcp 172.239.48.230:6443: connect: connection refused" node="172-239-48-230" Jan 23 01:07:19.937125 kubelet[2350]: E0123 01:07:19.937055 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:19.937749 containerd[1562]: time="2026-01-23T01:07:19.937689907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-48-230,Uid:be61ea118d9832dea6cf03e97ee742c4,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:19.943309 kubelet[2350]: E0123 01:07:19.942995 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:19.943464 containerd[1562]: time="2026-01-23T01:07:19.943421908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-48-230,Uid:942c076c2453511f1314e58ef66bbc79,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:19.948339 kubelet[2350]: E0123 01:07:19.948184 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:19.962074 containerd[1562]: time="2026-01-23T01:07:19.961962505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-48-230,Uid:d088f3ee6de527987345873f838ac189,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:19.966113 containerd[1562]: time="2026-01-23T01:07:19.966084574Z" level=info msg="connecting to shim 837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715" address="unix:///run/containerd/s/3bf93d74320a67aed6f2d713989549a918a77eb87b297d6807b960ba9938cafa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:19.980992 containerd[1562]: time="2026-01-23T01:07:19.980799793Z" level=info msg="connecting to shim e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2" address="unix:///run/containerd/s/181dfe2abd99c62b22008451598c8f92798b1ea10289709e0c19b049cb92a92f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:19.996083 containerd[1562]: time="2026-01-23T01:07:19.996053823Z" level=info msg="connecting to shim b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef" address="unix:///run/containerd/s/61400a57e25d005ebbcce08a6ed99cef67cb0a02144d2ce77197a02e7db34210" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:20.013100 systemd[1]: Started cri-containerd-837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715.scope - libcontainer container 837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715. Jan 23 01:07:20.026211 systemd[1]: Started cri-containerd-e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2.scope - libcontainer container e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2. Jan 23 01:07:20.036116 systemd[1]: Started cri-containerd-b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef.scope - libcontainer container b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef. 
Jan 23 01:07:20.083544 containerd[1562]: time="2026-01-23T01:07:20.083495538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-48-230,Uid:be61ea118d9832dea6cf03e97ee742c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715\"" Jan 23 01:07:20.085046 kubelet[2350]: E0123 01:07:20.085006 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:20.089311 containerd[1562]: time="2026-01-23T01:07:20.089286260Z" level=info msg="CreateContainer within sandbox \"837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:07:20.100390 containerd[1562]: time="2026-01-23T01:07:20.100369312Z" level=info msg="Container b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:20.101134 kubelet[2350]: E0123 01:07:20.101073 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.48.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-48-230?timeout=10s\": dial tcp 172.239.48.230:6443: connect: connection refused" interval="800ms" Jan 23 01:07:20.108761 containerd[1562]: time="2026-01-23T01:07:20.108568078Z" level=info msg="CreateContainer within sandbox \"837c2b4a4bb1cbca32acfafe6502bc8ae22d2d34611645743b9dfbc83b75a715\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0\"" Jan 23 01:07:20.110139 containerd[1562]: time="2026-01-23T01:07:20.110109742Z" level=info msg="StartContainer for \"b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0\"" Jan 23 01:07:20.112410 containerd[1562]: time="2026-01-23T01:07:20.112390956Z" level=info msg="connecting to shim b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0" address="unix:///run/containerd/s/3bf93d74320a67aed6f2d713989549a918a77eb87b297d6807b960ba9938cafa" protocol=ttrpc version=3 Jan 23 01:07:20.122415 containerd[1562]: time="2026-01-23T01:07:20.122393906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-48-230,Uid:d088f3ee6de527987345873f838ac189,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef\"" Jan 23 01:07:20.123147 kubelet[2350]: E0123 01:07:20.123102 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:20.124850 containerd[1562]: time="2026-01-23T01:07:20.124832581Z" level=info msg="CreateContainer within sandbox \"b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:07:20.129748 containerd[1562]: time="2026-01-23T01:07:20.129729431Z" level=info msg="Container 47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:20.133646 containerd[1562]: time="2026-01-23T01:07:20.133626439Z" level=info msg="CreateContainer within sandbox \"b8433f7ee534ddfe8f6d6614c10135ef4fdcc96dc7ce34775dcb9021eff638ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258\"" Jan 23 01:07:20.134085 containerd[1562]: time="2026-01-23T01:07:20.134068899Z" level=info msg="StartContainer for \"47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258\"" Jan 23 01:07:20.135077 containerd[1562]: time="2026-01-23T01:07:20.135056461Z" level=info msg="connecting to shim 47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258" address="unix:///run/containerd/s/61400a57e25d005ebbcce08a6ed99cef67cb0a02144d2ce77197a02e7db34210" protocol=ttrpc version=3 Jan 23 01:07:20.139573 containerd[1562]: time="2026-01-23T01:07:20.139538830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-48-230,Uid:942c076c2453511f1314e58ef66bbc79,Namespace:kube-system,Attempt:0,} returns sandbox id \"e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2\"" Jan 23 01:07:20.140391 kubelet[2350]: E0123 01:07:20.140303 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:20.143409 containerd[1562]: time="2026-01-23T01:07:20.143381518Z" level=info msg="CreateContainer within sandbox \"e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:07:20.146250 systemd[1]: Started cri-containerd-b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0.scope - libcontainer container b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0. Jan 23 01:07:20.149513 containerd[1562]: time="2026-01-23T01:07:20.149466080Z" level=info msg="Container 9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:20.154342 containerd[1562]: time="2026-01-23T01:07:20.154301690Z" level=info msg="CreateContainer within sandbox \"e12643f9e200e183b49cdde2b142b10cf79d86369282e80231afad2a18d27de2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661\"" Jan 23 01:07:20.155018 containerd[1562]: time="2026-01-23T01:07:20.154998321Z" level=info msg="StartContainer for \"9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661\"" Jan 23 01:07:20.155820 containerd[1562]: time="2026-01-23T01:07:20.155801503Z" level=info msg="connecting to shim 9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661" address="unix:///run/containerd/s/181dfe2abd99c62b22008451598c8f92798b1ea10289709e0c19b049cb92a92f" protocol=ttrpc version=3 Jan 23 01:07:20.172112 systemd[1]: Started cri-containerd-47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258.scope - libcontainer container 47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258. Jan 23 01:07:20.182198 systemd[1]: Started cri-containerd-9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661.scope - libcontainer container 9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661. 
Jan 23 01:07:20.250157 containerd[1562]: time="2026-01-23T01:07:20.250066011Z" level=info msg="StartContainer for \"b0e0539c5e85e03488fb613a0730d0b6f9f244ca1b869a74926261947e35d5c0\" returns successfully" Jan 23 01:07:20.270406 containerd[1562]: time="2026-01-23T01:07:20.270357372Z" level=info msg="StartContainer for \"9985fe45a63b9d04f8a1d90008a5be680a2d38655ce3883e0a4dc3c075cda661\" returns successfully" Jan 23 01:07:20.280206 containerd[1562]: time="2026-01-23T01:07:20.280183002Z" level=info msg="StartContainer for \"47d5272765c1ab93f78b69d3afc53b84cc20f2acfab6f4ffbf33f94a1afa7258\" returns successfully" Jan 23 01:07:20.284144 kubelet[2350]: I0123 01:07:20.284119 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-48-230" Jan 23 01:07:20.284402 kubelet[2350]: E0123 01:07:20.284375 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.48.230:6443/api/v1/nodes\": dial tcp 172.239.48.230:6443: connect: connection refused" node="172-239-48-230" Jan 23 01:07:20.542047 kubelet[2350]: E0123 01:07:20.541838 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:20.542047 kubelet[2350]: E0123 01:07:20.541936 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:20.547402 kubelet[2350]: E0123 01:07:20.547354 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:20.548386 kubelet[2350]: E0123 01:07:20.548248 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:20.551005 kubelet[2350]: E0123 01:07:20.549546 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:20.551266 kubelet[2350]: E0123 01:07:20.551254 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:21.088777 kubelet[2350]: I0123 01:07:21.088254 2350 kubelet_node_status.go:75] "Attempting to register node" node="172-239-48-230" Jan 23 01:07:21.553746 kubelet[2350]: E0123 01:07:21.553125 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:21.553746 kubelet[2350]: E0123 01:07:21.553390 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:21.553746 kubelet[2350]: E0123 01:07:21.553583 2350 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:21.553746 kubelet[2350]: E0123 01:07:21.553649 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 
172.232.0.21" Jan 23 01:07:21.754687 kubelet[2350]: E0123 01:07:21.754625 2350 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-48-230\" not found" node="172-239-48-230" Jan 23 01:07:21.911470 kubelet[2350]: I0123 01:07:21.910523 2350 kubelet_node_status.go:78] "Successfully registered node" node="172-239-48-230" Jan 23 01:07:21.999670 kubelet[2350]: I0123 01:07:21.999639 2350 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:22.003774 kubelet[2350]: E0123 01:07:22.003756 2350 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-48-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:22.003774 kubelet[2350]: I0123 01:07:22.003773 2350 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:22.004819 kubelet[2350]: E0123 01:07:22.004690 2350 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-48-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:22.004819 kubelet[2350]: I0123 01:07:22.004706 2350 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:22.005898 kubelet[2350]: E0123 01:07:22.005874 2350 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-48-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:22.487287 kubelet[2350]: I0123 01:07:22.487260 2350 apiserver.go:52] "Watching apiserver" Jan 23 01:07:22.501875 kubelet[2350]: I0123 01:07:22.501858 2350 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:07:22.550896 kubelet[2350]: I0123 01:07:22.550870 2350 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:22.552023 kubelet[2350]: E0123 01:07:22.551963 2350 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-48-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:22.552102 kubelet[2350]: E0123 01:07:22.552088 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:22.705753 kubelet[2350]: I0123 01:07:22.705713 2350 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:22.709930 kubelet[2350]: E0123 01:07:22.709901 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:23.553190 kubelet[2350]: E0123 01:07:23.553154 2350 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:23.967145 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... Jan 23 01:07:23.967164 systemd[1]: Reloading... 
Jan 23 01:07:24.051020 zram_generator::config[2668]: No configuration found. Jan 23 01:07:24.277283 systemd[1]: Reloading finished in 309 ms. Jan 23 01:07:24.303453 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:24.322118 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:07:24.322400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:24.322462 systemd[1]: kubelet.service: Consumed 839ms CPU time, 133.2M memory peak. Jan 23 01:07:24.324809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:07:24.513041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:07:24.522832 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:07:24.568847 kubelet[2713]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:07:24.568847 kubelet[2713]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:07:24.568847 kubelet[2713]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:07:24.568847 kubelet[2713]: I0123 01:07:24.567722 2713 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:07:24.575969 kubelet[2713]: I0123 01:07:24.575948 2713 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 01:07:24.575969 kubelet[2713]: I0123 01:07:24.575965 2713 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:07:24.576202 kubelet[2713]: I0123 01:07:24.576176 2713 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 01:07:24.577192 kubelet[2713]: I0123 01:07:24.577176 2713 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 01:07:24.579148 kubelet[2713]: I0123 01:07:24.578864 2713 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:07:24.582929 kubelet[2713]: I0123 01:07:24.582898 2713 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:07:24.586302 kubelet[2713]: I0123 01:07:24.586263 2713 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:07:24.586490 kubelet[2713]: I0123 01:07:24.586452 2713 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:07:24.586630 kubelet[2713]: I0123 01:07:24.586480 2713 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-48-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:07:24.586713 kubelet[2713]: I0123 01:07:24.586635 2713 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:07:24.586713 kubelet[2713]: I0123 01:07:24.586645 2713 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 01:07:24.586713 kubelet[2713]: I0123 01:07:24.586687 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:24.587419 kubelet[2713]: I0123 01:07:24.586813 2713 kubelet.go:446] "Attempting to sync node with API server" Jan 23 01:07:24.587419 kubelet[2713]: I0123 01:07:24.586839 2713 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:07:24.587419 kubelet[2713]: I0123 01:07:24.586858 2713 kubelet.go:352] "Adding apiserver pod source" Jan 23 01:07:24.587419 kubelet[2713]: I0123 01:07:24.586867 2713 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:07:24.588021 kubelet[2713]: I0123 01:07:24.588003 2713 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:07:24.589696 kubelet[2713]: I0123 01:07:24.588956 2713 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 01:07:24.589696 kubelet[2713]: I0123 01:07:24.589508 2713 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:07:24.589696 kubelet[2713]: I0123 01:07:24.589528 2713 server.go:1287] "Started kubelet" Jan 23 01:07:24.601347 kubelet[2713]: I0123 01:07:24.601238 2713 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:07:24.602338 kubelet[2713]: I0123 01:07:24.602300 2713 server.go:479] "Adding 
debug handlers to kubelet server" Jan 23 01:07:24.603645 kubelet[2713]: I0123 01:07:24.603605 2713 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:07:24.604277 kubelet[2713]: I0123 01:07:24.604265 2713 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:07:24.605242 kubelet[2713]: I0123 01:07:24.605221 2713 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:07:24.605318 kubelet[2713]: E0123 01:07:24.605305 2713 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:07:24.606031 kubelet[2713]: I0123 01:07:24.605931 2713 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:07:24.609373 kubelet[2713]: I0123 01:07:24.609350 2713 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:07:24.612967 kubelet[2713]: I0123 01:07:24.612938 2713 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:07:24.613131 kubelet[2713]: I0123 01:07:24.613077 2713 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:07:24.614270 kubelet[2713]: I0123 01:07:24.614195 2713 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:07:24.616118 kubelet[2713]: I0123 01:07:24.616095 2713 factory.go:221] Registration of the containerd container factory successfully Jan 23 01:07:24.616118 kubelet[2713]: I0123 01:07:24.616114 2713 factory.go:221] Registration of the systemd container factory successfully Jan 23 01:07:24.619688 kubelet[2713]: I0123 01:07:24.619580 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 01:07:24.620784 kubelet[2713]: I0123 01:07:24.620769 2713 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 01:07:24.620860 kubelet[2713]: I0123 01:07:24.620850 2713 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 01:07:24.620922 kubelet[2713]: I0123 01:07:24.620910 2713 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:07:24.621010 kubelet[2713]: I0123 01:07:24.620955 2713 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 01:07:24.621096 kubelet[2713]: E0123 01:07:24.621079 2713 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:07:24.672106 kubelet[2713]: I0123 01:07:24.671991 2713 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:07:24.672106 kubelet[2713]: I0123 01:07:24.672076 2713 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672230 2713 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672371 2713 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672380 2713 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672396 2713 policy_none.go:49] "None policy: Start" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672405 2713 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672415 2713 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:07:24.673182 kubelet[2713]: I0123 01:07:24.672500 2713 state_mem.go:75] "Updated machine memory state" Jan 23 01:07:24.677411 kubelet[2713]: I0123 01:07:24.677396 2713 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 01:07:24.678227 kubelet[2713]: I0123 01:07:24.678193 2713 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:07:24.678264 kubelet[2713]: I0123 01:07:24.678216 2713 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:07:24.678702 kubelet[2713]: I0123 01:07:24.678681 2713 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:07:24.681917 kubelet[2713]: E0123 01:07:24.681889 2713 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:07:24.722160 kubelet[2713]: I0123 01:07:24.721723 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:24.722160 kubelet[2713]: I0123 01:07:24.721761 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:24.722160 kubelet[2713]: I0123 01:07:24.721994 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:24.728031 kubelet[2713]: E0123 01:07:24.727969 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-48-230\" already exists" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:24.783624 kubelet[2713]: I0123 01:07:24.783586 2713 kubelet_node_status.go:75] "Attempting to register node" node="172-239-48-230" Jan 23 01:07:24.788532 kubelet[2713]: I0123 01:07:24.788517 2713 kubelet_node_status.go:124] "Node was previously registered" node="172-239-48-230" Jan 23 01:07:24.788838 kubelet[2713]: I0123 01:07:24.788827 2713 kubelet_node_status.go:78] "Successfully registered node" node="172-239-48-230" Jan 23 01:07:24.915759 kubelet[2713]: I0123 01:07:24.914829 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-kubeconfig\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:24.915759 kubelet[2713]: I0123 01:07:24.914894 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d088f3ee6de527987345873f838ac189-kubeconfig\") pod \"kube-scheduler-172-239-48-230\" (UID: \"d088f3ee6de527987345873f838ac189\") " pod="kube-system/kube-scheduler-172-239-48-230" Jan 23 01:07:24.915759 kubelet[2713]: I0123 01:07:24.914939 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-k8s-certs\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:24.915759 kubelet[2713]: I0123 01:07:24.914957 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-ca-certs\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:24.915759 kubelet[2713]: I0123 01:07:24.915001 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-flexvolume-dir\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:24.915946 kubelet[2713]: I0123 01:07:24.915030 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-k8s-certs\") pod \"kube-controller-manager-172-239-48-230\" (UID: 
\"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:24.915946 kubelet[2713]: I0123 01:07:24.915045 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-ca-certs\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:24.915946 kubelet[2713]: I0123 01:07:24.915080 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be61ea118d9832dea6cf03e97ee742c4-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-48-230\" (UID: \"be61ea118d9832dea6cf03e97ee742c4\") " pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:24.915946 kubelet[2713]: I0123 01:07:24.915095 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/942c076c2453511f1314e58ef66bbc79-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-48-230\" (UID: \"942c076c2453511f1314e58ef66bbc79\") " pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:25.028059 kubelet[2713]: E0123 01:07:25.027353 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.028059 kubelet[2713]: E0123 01:07:25.027660 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.028776 kubelet[2713]: E0123 01:07:25.028354 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.590992 kubelet[2713]: I0123 01:07:25.590936 2713 apiserver.go:52] "Watching apiserver" Jan 23 01:07:25.613770 kubelet[2713]: I0123 01:07:25.613732 2713 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:07:25.650170 kubelet[2713]: I0123 01:07:25.650093 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:25.650170 kubelet[2713]: E0123 01:07:25.650107 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.650323 kubelet[2713]: I0123 01:07:25.650296 2713 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:25.656846 kubelet[2713]: E0123 01:07:25.656815 2713 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-48-230\" already exists" pod="kube-system/kube-apiserver-172-239-48-230" Jan 23 01:07:25.657169 kubelet[2713]: E0123 01:07:25.657110 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.662524 kubelet[2713]: E0123 01:07:25.662504 2713 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-172-239-48-230\" already exists" pod="kube-system/kube-controller-manager-172-239-48-230" Jan 23 01:07:25.662699 kubelet[2713]: E0123 01:07:25.662607 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:25.697232 kubelet[2713]: I0123 01:07:25.697082 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-48-230" podStartSLOduration=1.697069213 podStartE2EDuration="1.697069213s" podCreationTimestamp="2026-01-23 01:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:25.691475882 +0000 UTC m=+1.162360835" watchObservedRunningTime="2026-01-23 01:07:25.697069213 +0000 UTC m=+1.167954166" Jan 23 01:07:25.704491 kubelet[2713]: I0123 01:07:25.704456 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-48-230" podStartSLOduration=1.704446558 podStartE2EDuration="1.704446558s" podCreationTimestamp="2026-01-23 01:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:25.697356493 +0000 UTC m=+1.168241446" watchObservedRunningTime="2026-01-23 01:07:25.704446558 +0000 UTC m=+1.175331511" Jan 23 01:07:25.714344 kubelet[2713]: I0123 01:07:25.714298 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-48-230" podStartSLOduration=3.714288207 podStartE2EDuration="3.714288207s" podCreationTimestamp="2026-01-23 01:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:25.704755438 +0000 UTC m=+1.175640391" watchObservedRunningTime="2026-01-23 01:07:25.714288207 +0000 UTC m=+1.185173160" Jan 23 01:07:26.651751 kubelet[2713]: E0123 01:07:26.651260 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:26.651751 kubelet[2713]: E0123 01:07:26.651350 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:26.652690 kubelet[2713]: E0123 01:07:26.652599 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:30.057228 kubelet[2713]: I0123 01:07:30.057175 2713 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:07:30.057863 kubelet[2713]: I0123 01:07:30.057777 2713 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:07:30.057908 containerd[1562]: time="2026-01-23T01:07:30.057646262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:07:30.781268 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 23 01:07:30.933640 systemd[1]: Created slice kubepods-besteffort-pod1bf613ef_86d7_471e_9443_a7f61fbe7952.slice - libcontainer container kubepods-besteffort-pod1bf613ef_86d7_471e_9443_a7f61fbe7952.slice. Jan 23 01:07:30.949041 kubelet[2713]: I0123 01:07:30.949013 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bf613ef-86d7-471e-9443-a7f61fbe7952-kube-proxy\") pod \"kube-proxy-7vqdw\" (UID: \"1bf613ef-86d7-471e-9443-a7f61fbe7952\") " pod="kube-system/kube-proxy-7vqdw" Jan 23 01:07:30.949041 kubelet[2713]: I0123 01:07:30.949042 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bf613ef-86d7-471e-9443-a7f61fbe7952-lib-modules\") pod \"kube-proxy-7vqdw\" (UID: \"1bf613ef-86d7-471e-9443-a7f61fbe7952\") " pod="kube-system/kube-proxy-7vqdw" Jan 23 01:07:30.949167 kubelet[2713]: I0123 01:07:30.949063 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bf613ef-86d7-471e-9443-a7f61fbe7952-xtables-lock\") pod \"kube-proxy-7vqdw\" (UID: \"1bf613ef-86d7-471e-9443-a7f61fbe7952\") " pod="kube-system/kube-proxy-7vqdw" Jan 23 01:07:30.949167 kubelet[2713]: I0123 01:07:30.949077 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdl7k\" (UniqueName: \"kubernetes.io/projected/1bf613ef-86d7-471e-9443-a7f61fbe7952-kube-api-access-fdl7k\") pod \"kube-proxy-7vqdw\" (UID: \"1bf613ef-86d7-471e-9443-a7f61fbe7952\") " pod="kube-system/kube-proxy-7vqdw" Jan 23 01:07:31.143433 systemd[1]: Created slice kubepods-besteffort-pod157830d7_ac2f_49c4_a5f7_cd05433972bd.slice - libcontainer container kubepods-besteffort-pod157830d7_ac2f_49c4_a5f7_cd05433972bd.slice. 
Jan 23 01:07:31.150408 kubelet[2713]: I0123 01:07:31.150217 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hjlt\" (UniqueName: \"kubernetes.io/projected/157830d7-ac2f-49c4-a5f7-cd05433972bd-kube-api-access-8hjlt\") pod \"tigera-operator-7dcd859c48-m5zks\" (UID: \"157830d7-ac2f-49c4-a5f7-cd05433972bd\") " pod="tigera-operator/tigera-operator-7dcd859c48-m5zks" Jan 23 01:07:31.150408 kubelet[2713]: I0123 01:07:31.150277 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/157830d7-ac2f-49c4-a5f7-cd05433972bd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m5zks\" (UID: \"157830d7-ac2f-49c4-a5f7-cd05433972bd\") " pod="tigera-operator/tigera-operator-7dcd859c48-m5zks" Jan 23 01:07:31.243361 kubelet[2713]: E0123 01:07:31.243330 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:31.244220 containerd[1562]: time="2026-01-23T01:07:31.244186625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vqdw,Uid:1bf613ef-86d7-471e-9443-a7f61fbe7952,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:31.272462 containerd[1562]: time="2026-01-23T01:07:31.272303941Z" level=info msg="connecting to shim e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8" address="unix:///run/containerd/s/565cd73124a83bc970d4ce6ab095e3b8208ad18c5957d353ccbdf7dbd94f69ba" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:31.299122 systemd[1]: Started cri-containerd-e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8.scope - libcontainer container e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8. 
Jan 23 01:07:31.331345 containerd[1562]: time="2026-01-23T01:07:31.331284429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vqdw,Uid:1bf613ef-86d7-471e-9443-a7f61fbe7952,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8\"" Jan 23 01:07:31.332219 kubelet[2713]: E0123 01:07:31.332199 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:31.334626 containerd[1562]: time="2026-01-23T01:07:31.334603255Z" level=info msg="CreateContainer within sandbox \"e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:07:31.343775 containerd[1562]: time="2026-01-23T01:07:31.343750624Z" level=info msg="Container 9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:31.349381 containerd[1562]: time="2026-01-23T01:07:31.349346845Z" level=info msg="CreateContainer within sandbox \"e5035457644fb17643245325a795f1ffb1e48febfca11e3c51eed9b96513e3f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69\"" Jan 23 01:07:31.350128 containerd[1562]: time="2026-01-23T01:07:31.350080706Z" level=info msg="StartContainer for \"9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69\"" Jan 23 01:07:31.351841 containerd[1562]: time="2026-01-23T01:07:31.351812270Z" level=info msg="connecting to shim 9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69" address="unix:///run/containerd/s/565cd73124a83bc970d4ce6ab095e3b8208ad18c5957d353ccbdf7dbd94f69ba" protocol=ttrpc version=3 Jan 23 01:07:31.372104 systemd[1]: Started cri-containerd-9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69.scope - libcontainer container 9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69. Jan 23 01:07:31.447345 containerd[1562]: time="2026-01-23T01:07:31.447224241Z" level=info msg="StartContainer for \"9cc7e41c2fd81613ff6a959713faf31825a974f75b68f4ee0daaa10f8e5bed69\" returns successfully" Jan 23 01:07:31.454944 containerd[1562]: time="2026-01-23T01:07:31.454916696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m5zks,Uid:157830d7-ac2f-49c4-a5f7-cd05433972bd,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:07:31.472026 containerd[1562]: time="2026-01-23T01:07:31.471400859Z" level=info msg="connecting to shim b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5" address="unix:///run/containerd/s/11780320a246f1c52a17bbf6bf07299214f4b9f8f890be6363f8cb5d7b3cdb06" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:31.492629 kubelet[2713]: E0123 01:07:31.492589 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:31.506233 systemd[1]: Started cri-containerd-b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5.scope - libcontainer container b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5. 
Jan 23 01:07:31.562290 containerd[1562]: time="2026-01-23T01:07:31.562222221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m5zks,Uid:157830d7-ac2f-49c4-a5f7-cd05433972bd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5\"" Jan 23 01:07:31.565409 containerd[1562]: time="2026-01-23T01:07:31.565322827Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:07:31.662604 kubelet[2713]: E0123 01:07:31.662565 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:31.663083 kubelet[2713]: E0123 01:07:31.663060 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:31.682816 kubelet[2713]: I0123 01:07:31.682779 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7vqdw" podStartSLOduration=1.682766212 podStartE2EDuration="1.682766212s" podCreationTimestamp="2026-01-23 01:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:07:31.674811376 +0000 UTC m=+7.145696329" watchObservedRunningTime="2026-01-23 01:07:31.682766212 +0000 UTC m=+7.153651165" Jan 23 01:07:32.065391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933670293.mount: Deactivated successfully. Jan 23 01:07:32.369107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114543328.mount: Deactivated successfully. 
Jan 23 01:07:33.742012 kubelet[2713]: E0123 01:07:33.741904 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:33.888570 containerd[1562]: time="2026-01-23T01:07:33.888304610Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:33.889838 containerd[1562]: time="2026-01-23T01:07:33.889341661Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:07:33.890401 containerd[1562]: time="2026-01-23T01:07:33.890367173Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:33.893706 containerd[1562]: time="2026-01-23T01:07:33.893671070Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:33.894390 containerd[1562]: time="2026-01-23T01:07:33.894342242Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.328991186s" Jan 23 01:07:33.894464 containerd[1562]: time="2026-01-23T01:07:33.894449469Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:07:33.898501 containerd[1562]: time="2026-01-23T01:07:33.898474898Z" level=info msg="CreateContainer within sandbox \"b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:07:33.906632 containerd[1562]: time="2026-01-23T01:07:33.906609612Z" level=info msg="Container d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:07:33.911691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146264045.mount: Deactivated successfully. Jan 23 01:07:33.913659 containerd[1562]: time="2026-01-23T01:07:33.913606088Z" level=info msg="CreateContainer within sandbox \"b3ea30e02c09a4760608c2b988cf008007ec1fdf9153edcbb0960fceff712ba5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b\"" Jan 23 01:07:33.914763 containerd[1562]: time="2026-01-23T01:07:33.914696237Z" level=info msg="StartContainer for \"d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b\"" Jan 23 01:07:33.915854 containerd[1562]: time="2026-01-23T01:07:33.915813647Z" level=info msg="connecting to shim d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b" address="unix:///run/containerd/s/11780320a246f1c52a17bbf6bf07299214f4b9f8f890be6363f8cb5d7b3cdb06" protocol=ttrpc version=3 Jan 23 01:07:33.942136 systemd[1]: Started cri-containerd-d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b.scope - libcontainer container d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b. 
Jan 23 01:07:33.982204 containerd[1562]: time="2026-01-23T01:07:33.982154727Z" level=info msg="StartContainer for \"d7e1a8128b23cf4cbce352c5b7c129b629ed4a3e4003577fd52569d46fabb86b\" returns successfully" Jan 23 01:07:34.669489 kubelet[2713]: E0123 01:07:34.669428 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:35.182726 kubelet[2713]: E0123 01:07:35.182668 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:35.198007 kubelet[2713]: I0123 01:07:35.197801 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m5zks" podStartSLOduration=1.866354795 podStartE2EDuration="4.197788454s" podCreationTimestamp="2026-01-23 01:07:31 +0000 UTC" firstStartedPulling="2026-01-23 01:07:31.564288375 +0000 UTC m=+7.035173328" lastFinishedPulling="2026-01-23 01:07:33.895722034 +0000 UTC m=+9.366606987" observedRunningTime="2026-01-23 01:07:34.67840726 +0000 UTC m=+10.149292213" watchObservedRunningTime="2026-01-23 01:07:35.197788454 +0000 UTC m=+10.668673407" Jan 23 01:07:35.673758 kubelet[2713]: E0123 01:07:35.673724 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:39.486817 sudo[1797]: pam_unix(sudo:session): session closed for user root Jan 23 01:07:39.509086 sshd[1796]: Connection closed by 68.220.241.50 port 60188 Jan 23 01:07:39.512347 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 23 01:07:39.519348 systemd[1]: sshd@6-172.239.48.230:22-68.220.241.50:60188.service: Deactivated successfully. Jan 23 01:07:39.523470 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:07:39.524010 systemd[1]: session-7.scope: Consumed 3.692s CPU time, 229M memory peak. Jan 23 01:07:39.526032 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:07:39.529121 systemd-logind[1531]: Removed session 7. Jan 23 01:07:44.091878 systemd[1]: Created slice kubepods-besteffort-poda6afcdc1_13a0_4020_9061_3b3adfd38e8b.slice - libcontainer container kubepods-besteffort-poda6afcdc1_13a0_4020_9061_3b3adfd38e8b.slice. 
Jan 23 01:07:44.143398 kubelet[2713]: I0123 01:07:44.143277 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6afcdc1-13a0-4020-9061-3b3adfd38e8b-tigera-ca-bundle\") pod \"calico-typha-764f6c46b7-r9lw6\" (UID: \"a6afcdc1-13a0-4020-9061-3b3adfd38e8b\") " pod="calico-system/calico-typha-764f6c46b7-r9lw6" Jan 23 01:07:44.143398 kubelet[2713]: I0123 01:07:44.143332 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6afcdc1-13a0-4020-9061-3b3adfd38e8b-typha-certs\") pod \"calico-typha-764f6c46b7-r9lw6\" (UID: \"a6afcdc1-13a0-4020-9061-3b3adfd38e8b\") " pod="calico-system/calico-typha-764f6c46b7-r9lw6" Jan 23 01:07:44.143398 kubelet[2713]: I0123 01:07:44.143351 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcq7q\" (UniqueName: \"kubernetes.io/projected/a6afcdc1-13a0-4020-9061-3b3adfd38e8b-kube-api-access-kcq7q\") pod \"calico-typha-764f6c46b7-r9lw6\" (UID: \"a6afcdc1-13a0-4020-9061-3b3adfd38e8b\") " pod="calico-system/calico-typha-764f6c46b7-r9lw6" Jan 23 01:07:44.282610 kubelet[2713]: W0123 01:07:44.282309 2713 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:172-239-48-230" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-239-48-230' and this object Jan 23 01:07:44.283004 kubelet[2713]: I0123 01:07:44.282309 2713 status_manager.go:890] "Failed to get status for pod" podUID="3256ed18-371a-4668-89dc-37565734cc9a" pod="calico-system/calico-node-s7ksf" err="pods \"calico-node-s7ksf\" is forbidden: User \"system:node:172-239-48-230\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-239-48-230' and this object" Jan 23 01:07:44.283004 kubelet[2713]: E0123 01:07:44.282856 2713 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:172-239-48-230\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-239-48-230' and this object" logger="UnhandledError" Jan 23 01:07:44.288897 systemd[1]: Created slice kubepods-besteffort-pod3256ed18_371a_4668_89dc_37565734cc9a.slice - libcontainer container kubepods-besteffort-pod3256ed18_371a_4668_89dc_37565734cc9a.slice. 
Jan 23 01:07:44.345806 kubelet[2713]: I0123 01:07:44.345495 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvcb\" (UniqueName: \"kubernetes.io/projected/3256ed18-371a-4668-89dc-37565734cc9a-kube-api-access-npvcb\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.345806 kubelet[2713]: I0123 01:07:44.345557 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-cni-log-dir\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.345806 kubelet[2713]: I0123 01:07:44.345576 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3256ed18-371a-4668-89dc-37565734cc9a-node-certs\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.345806 kubelet[2713]: I0123 01:07:44.345591 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-var-lib-calico\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.345806 kubelet[2713]: I0123 01:07:44.345606 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-flexvol-driver-host\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346021 kubelet[2713]: I0123 01:07:44.345622 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-policysync\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346021 kubelet[2713]: I0123 01:07:44.345637 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-var-run-calico\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346021 kubelet[2713]: I0123 01:07:44.345652 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-cni-bin-dir\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346021 kubelet[2713]: I0123 01:07:44.345669 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-lib-modules\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346021 kubelet[2713]: I0123 01:07:44.345684 2713 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-xtables-lock\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346143 kubelet[2713]: I0123 01:07:44.345701 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3256ed18-371a-4668-89dc-37565734cc9a-cni-net-dir\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.346143 kubelet[2713]: I0123 01:07:44.345714 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3256ed18-371a-4668-89dc-37565734cc9a-tigera-ca-bundle\") pod \"calico-node-s7ksf\" (UID: \"3256ed18-371a-4668-89dc-37565734cc9a\") " pod="calico-system/calico-node-s7ksf" Jan 23 01:07:44.394906 kubelet[2713]: E0123 01:07:44.394725 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:44.395478 containerd[1562]: time="2026-01-23T01:07:44.395393558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764f6c46b7-r9lw6,Uid:a6afcdc1-13a0-4020-9061-3b3adfd38e8b,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:44.414464 containerd[1562]: time="2026-01-23T01:07:44.414431998Z" level=info msg="connecting to shim 9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9" address="unix:///run/containerd/s/4e02af2cf72c006f8b93d03222f4398e6b5e7f8c63f481399d1abb34087ead62" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:44.438231 systemd[1]: Started cri-containerd-9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9.scope - libcontainer container 9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9. Jan 23 01:07:44.448171 kubelet[2713]: E0123 01:07:44.448108 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.448171 kubelet[2713]: W0123 01:07:44.448126 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.448457 kubelet[2713]: E0123 01:07:44.448150 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.464235 kubelet[2713]: E0123 01:07:44.464165 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.464235 kubelet[2713]: W0123 01:07:44.464184 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.464235 kubelet[2713]: E0123 01:07:44.464202 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.469944 kubelet[2713]: E0123 01:07:44.469612 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:07:44.534699 kubelet[2713]: E0123 01:07:44.534455 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.534956 kubelet[2713]: W0123 01:07:44.534780 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.534956 kubelet[2713]: E0123 01:07:44.534803 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.536483 kubelet[2713]: E0123 01:07:44.536332 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.536483 kubelet[2713]: W0123 01:07:44.536344 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.536483 kubelet[2713]: E0123 01:07:44.536384 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.537360 kubelet[2713]: E0123 01:07:44.537247 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.537360 kubelet[2713]: W0123 01:07:44.537260 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.537360 kubelet[2713]: E0123 01:07:44.537293 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.539053 kubelet[2713]: E0123 01:07:44.538871 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.539053 kubelet[2713]: W0123 01:07:44.538883 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.539053 kubelet[2713]: E0123 01:07:44.538893 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.539336 kubelet[2713]: E0123 01:07:44.539287 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.539913 kubelet[2713]: W0123 01:07:44.539390 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.539913 kubelet[2713]: E0123 01:07:44.539404 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.540105 kubelet[2713]: E0123 01:07:44.540081 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.540294 kubelet[2713]: W0123 01:07:44.540281 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.540724 kubelet[2713]: E0123 01:07:44.540632 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.540908 kubelet[2713]: E0123 01:07:44.540817 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.540908 kubelet[2713]: W0123 01:07:44.540828 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.540908 kubelet[2713]: E0123 01:07:44.540836 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.541137 kubelet[2713]: E0123 01:07:44.541126 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.541685 kubelet[2713]: W0123 01:07:44.541670 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.541857 kubelet[2713]: E0123 01:07:44.541776 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.542789 kubelet[2713]: E0123 01:07:44.542530 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.542789 kubelet[2713]: W0123 01:07:44.542541 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.542789 kubelet[2713]: E0123 01:07:44.542550 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.543078 kubelet[2713]: E0123 01:07:44.543061 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.543078 kubelet[2713]: W0123 01:07:44.543076 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.543149 kubelet[2713]: E0123 01:07:44.543086 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.544141 kubelet[2713]: E0123 01:07:44.544117 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.544141 kubelet[2713]: W0123 01:07:44.544133 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.544141 kubelet[2713]: E0123 01:07:44.544142 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.544554 kubelet[2713]: E0123 01:07:44.544451 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.544554 kubelet[2713]: W0123 01:07:44.544465 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.544554 kubelet[2713]: E0123 01:07:44.544473 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.545268 kubelet[2713]: E0123 01:07:44.545175 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.545268 kubelet[2713]: W0123 01:07:44.545188 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.545268 kubelet[2713]: E0123 01:07:44.545196 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.545459 kubelet[2713]: E0123 01:07:44.545406 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.545459 kubelet[2713]: W0123 01:07:44.545414 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.545459 kubelet[2713]: E0123 01:07:44.545421 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.545867 kubelet[2713]: E0123 01:07:44.545835 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.545867 kubelet[2713]: W0123 01:07:44.545858 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.545867 kubelet[2713]: E0123 01:07:44.545866 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.546195 kubelet[2713]: E0123 01:07:44.546181 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.546195 kubelet[2713]: W0123 01:07:44.546190 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.546249 kubelet[2713]: E0123 01:07:44.546198 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.547049 kubelet[2713]: E0123 01:07:44.547019 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.547049 kubelet[2713]: W0123 01:07:44.547032 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.547209 kubelet[2713]: E0123 01:07:44.547185 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.547736 kubelet[2713]: E0123 01:07:44.547688 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.547736 kubelet[2713]: W0123 01:07:44.547703 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.547736 kubelet[2713]: E0123 01:07:44.547712 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.547907 kubelet[2713]: E0123 01:07:44.547886 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.547907 kubelet[2713]: W0123 01:07:44.547900 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.547966 kubelet[2713]: E0123 01:07:44.547925 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.548224 kubelet[2713]: E0123 01:07:44.548141 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.548224 kubelet[2713]: W0123 01:07:44.548150 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.548224 kubelet[2713]: E0123 01:07:44.548194 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.548512 kubelet[2713]: E0123 01:07:44.548484 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.548512 kubelet[2713]: W0123 01:07:44.548497 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.548576 kubelet[2713]: E0123 01:07:44.548505 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.548576 kubelet[2713]: I0123 01:07:44.548548 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkd7z\" (UniqueName: \"kubernetes.io/projected/fb0b4136-7548-44aa-9706-52799d45da0f-kube-api-access-dkd7z\") pod \"csi-node-driver-znvnr\" (UID: \"fb0b4136-7548-44aa-9706-52799d45da0f\") " pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:44.548874 kubelet[2713]: E0123 01:07:44.548767 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.548874 kubelet[2713]: W0123 01:07:44.548777 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.548874 kubelet[2713]: E0123 01:07:44.548785 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.548874 kubelet[2713]: I0123 01:07:44.548797 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fb0b4136-7548-44aa-9706-52799d45da0f-registration-dir\") pod \"csi-node-driver-znvnr\" (UID: \"fb0b4136-7548-44aa-9706-52799d45da0f\") " pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:44.549629 kubelet[2713]: E0123 01:07:44.549150 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.549629 kubelet[2713]: W0123 01:07:44.549168 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.549629 kubelet[2713]: E0123 01:07:44.549187 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.550937 kubelet[2713]: E0123 01:07:44.550097 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.551032 kubelet[2713]: W0123 01:07:44.551017 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.551102 kubelet[2713]: E0123 01:07:44.551090 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.551380 kubelet[2713]: E0123 01:07:44.551364 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.551380 kubelet[2713]: W0123 01:07:44.551379 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.551441 kubelet[2713]: E0123 01:07:44.551396 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.551441 kubelet[2713]: I0123 01:07:44.551414 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fb0b4136-7548-44aa-9706-52799d45da0f-socket-dir\") pod \"csi-node-driver-znvnr\" (UID: \"fb0b4136-7548-44aa-9706-52799d45da0f\") " pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:44.551610 kubelet[2713]: E0123 01:07:44.551593 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.551610 kubelet[2713]: W0123 01:07:44.551606 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.551702 kubelet[2713]: E0123 01:07:44.551682 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.551726 kubelet[2713]: I0123 01:07:44.551705 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb0b4136-7548-44aa-9706-52799d45da0f-kubelet-dir\") pod \"csi-node-driver-znvnr\" (UID: \"fb0b4136-7548-44aa-9706-52799d45da0f\") " pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:44.552006 kubelet[2713]: E0123 01:07:44.551932 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.552006 kubelet[2713]: W0123 01:07:44.551941 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.552006 kubelet[2713]: E0123 01:07:44.551967 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.552242 kubelet[2713]: E0123 01:07:44.552201 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.552242 kubelet[2713]: W0123 01:07:44.552219 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.552242 kubelet[2713]: E0123 01:07:44.552236 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.552996 kubelet[2713]: E0123 01:07:44.552495 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.552996 kubelet[2713]: W0123 01:07:44.552627 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.552996 kubelet[2713]: E0123 01:07:44.552669 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.553095 kubelet[2713]: I0123 01:07:44.553051 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fb0b4136-7548-44aa-9706-52799d45da0f-varrun\") pod \"csi-node-driver-znvnr\" (UID: \"fb0b4136-7548-44aa-9706-52799d45da0f\") " pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:44.553215 kubelet[2713]: E0123 01:07:44.553188 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.553215 kubelet[2713]: W0123 01:07:44.553200 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.553215 kubelet[2713]: E0123 01:07:44.553214 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.553539 kubelet[2713]: E0123 01:07:44.553459 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.553539 kubelet[2713]: W0123 01:07:44.553469 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.553539 kubelet[2713]: E0123 01:07:44.553477 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.553755 kubelet[2713]: E0123 01:07:44.553738 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.553755 kubelet[2713]: W0123 01:07:44.553751 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.553810 kubelet[2713]: E0123 01:07:44.553768 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.554137 kubelet[2713]: E0123 01:07:44.554028 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.554137 kubelet[2713]: W0123 01:07:44.554051 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.554137 kubelet[2713]: E0123 01:07:44.554060 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.555012 kubelet[2713]: E0123 01:07:44.554297 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.555012 kubelet[2713]: W0123 01:07:44.554502 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.555012 kubelet[2713]: E0123 01:07:44.554510 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.555012 kubelet[2713]: E0123 01:07:44.554761 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.555012 kubelet[2713]: W0123 01:07:44.554769 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.555012 kubelet[2713]: E0123 01:07:44.554776 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:44.561292 containerd[1562]: time="2026-01-23T01:07:44.561179156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-764f6c46b7-r9lw6,Uid:a6afcdc1-13a0-4020-9061-3b3adfd38e8b,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9\"" Jan 23 01:07:44.562280 kubelet[2713]: E0123 01:07:44.562229 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:44.564138 containerd[1562]: time="2026-01-23T01:07:44.564078559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:07:44.656385 kubelet[2713]: E0123 01:07:44.654830 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.656385 kubelet[2713]: W0123 01:07:44.654856 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.656385 kubelet[2713]: E0123 01:07:44.654877 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.656751 kubelet[2713]: E0123 01:07:44.656710 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.656751 kubelet[2713]: W0123 01:07:44.656725 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.656877 kubelet[2713]: E0123 01:07:44.656856 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.657265 kubelet[2713]: E0123 01:07:44.657248 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.657265 kubelet[2713]: W0123 01:07:44.657263 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.657368 kubelet[2713]: E0123 01:07:44.657348 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:44.657542 kubelet[2713]: E0123 01:07:44.657523 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:44.657542 kubelet[2713]: W0123 01:07:44.657537 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:44.657998 kubelet[2713]: E0123 01:07:44.657615 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 01:07:44.657998 kubelet[2713]: E0123 01:07:44.657775 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:07:44.657998 kubelet[2713]: W0123 01:07:44.657782 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:07:44.657998 kubelet[2713]: E0123 01:07:44.657794 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the driver-call.go/plugins.go message triple above repeats a further 21 times between 01:07:44.657 and 01:07:44.670]
Jan 23 01:07:44.932670 update_engine[1533]: I20260123 01:07:44.932026 1533 update_attempter.cc:509] Updating boot flags...
Jan 23 01:07:45.261113 kubelet[2713]: E0123 01:07:45.261095 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:07:45.262449 kubelet[2713]: W0123 01:07:45.262096 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:07:45.262449 kubelet[2713]: E0123 01:07:45.262141 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:07:45.322023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594788054.mount: Deactivated successfully.
Jan 23 01:07:45.492601 kubelet[2713]: E0123 01:07:45.492547 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:45.493845 containerd[1562]: time="2026-01-23T01:07:45.493464203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7ksf,Uid:3256ed18-371a-4668-89dc-37565734cc9a,Namespace:calico-system,Attempt:0,}"
Jan 23 01:07:45.516241 containerd[1562]: time="2026-01-23T01:07:45.515751823Z" level=info msg="connecting to shim ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c" address="unix:///run/containerd/s/7b3dacf84138f69cbb93e270a4e0d2efae2f1da663f457f794a1b140e362113e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:07:45.565160 systemd[1]: Started cri-containerd-ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c.scope - libcontainer container ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c.
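The repeated kubelet messages above come from the FlexVolume dynamic prober: it execs each driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and parses stdout as JSON. Because the nodeagent~uds/uds binary is missing, the call produces empty output, and unmarshalling an empty byte slice is exactly what yields "unexpected end of JSON input". A minimal Go sketch of both sides; the DriverStatus struct is an illustrative subset of the FlexVolume convention, not the kubelet's actual type:

package main

import (
    "encoding/json"
    "fmt"
)

// DriverStatus mirrors the JSON a FlexVolume driver prints on stdout for
// "init" (illustrative subset of the convention, not kubelet's type).
type DriverStatus struct {
    Status       string          `json:"status"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
    // Missing driver binary -> empty stdout -> this exact unmarshal error.
    var st DriverStatus
    if err := json.Unmarshal([]byte(""), &st); err != nil {
        fmt.Println("error:", err) // error: unexpected end of JSON input
    }

    // What a healthy driver would print in response to "init".
    ok := DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
    out, _ := json.Marshal(ok)
    fmt.Println(string(out)) // {"status":"Success","capabilities":{"attach":false}}
}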
Jan 23 01:07:45.625573 containerd[1562]: time="2026-01-23T01:07:45.625480598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7ksf,Uid:3256ed18-371a-4668-89dc-37565734cc9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\""
Jan 23 01:07:45.626669 kubelet[2713]: E0123 01:07:45.626421 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:45.953661 containerd[1562]: time="2026-01-23T01:07:45.953553788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:45.954387 containerd[1562]: time="2026-01-23T01:07:45.954334060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 01:07:45.954821 containerd[1562]: time="2026-01-23T01:07:45.954768445Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:45.956116 containerd[1562]: time="2026-01-23T01:07:45.956093338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:45.957081 containerd[1562]: time="2026-01-23T01:07:45.957056767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.392956528s"
Jan 23 01:07:45.957081 containerd[1562]: time="2026-01-23T01:07:45.957079507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 01:07:45.958995 containerd[1562]: time="2026-01-23T01:07:45.958907946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 01:07:45.967183 containerd[1562]: time="2026-01-23T01:07:45.967161189Z" level=info msg="CreateContainer within sandbox \"9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 01:07:45.972022 containerd[1562]: time="2026-01-23T01:07:45.972003242Z" level=info msg="Container 0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:45.976166 containerd[1562]: time="2026-01-23T01:07:45.976135484Z" level=info msg="CreateContainer within sandbox \"9cea6e1af49faa5cb63305b3243cb2082bd62d2c13a939c1b444dba1b0b5c3f9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd\""
Jan 23 01:07:45.976789 containerd[1562]: time="2026-01-23T01:07:45.976719798Z" level=info msg="StartContainer for \"0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd\""
Jan 23 01:07:45.978258 containerd[1562]: time="2026-01-23T01:07:45.978238410Z" level=info msg="connecting to shim 0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd" address="unix:///run/containerd/s/4e02af2cf72c006f8b93d03222f4398e6b5e7f8c63f481399d1abb34087ead62" protocol=ttrpc version=3
Jan 23 01:07:46.006100 systemd[1]: Started cri-containerd-0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd.scope - libcontainer container 0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd.
Jan 23 01:07:46.074260 containerd[1562]: time="2026-01-23T01:07:46.074215978Z" level=info msg="StartContainer for \"0d835efe13ddfd32cc063803bb1e65c5c8aaa391209cfc59d0b5f7773e62b5dd\" returns successfully"
Jan 23 01:07:46.622785 kubelet[2713]: E0123 01:07:46.622163 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:07:46.695502 kubelet[2713]: E0123 01:07:46.695458 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:46.731027 containerd[1562]: time="2026-01-23T01:07:46.730194853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:46.731476 containerd[1562]: time="2026-01-23T01:07:46.730967195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 23 01:07:46.732359 containerd[1562]: time="2026-01-23T01:07:46.732326960Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:46.734255 containerd[1562]: time="2026-01-23T01:07:46.734205790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:46.734859 containerd[1562]: time="2026-01-23T01:07:46.734823983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 775.892597ms"
Jan 23 01:07:46.734859 containerd[1562]: time="2026-01-23T01:07:46.734855192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 23 01:07:46.737256 containerd[1562]: time="2026-01-23T01:07:46.737230057Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 01:07:46.749134 containerd[1562]: time="2026-01-23T01:07:46.748295717Z" level=info msg="Container e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:46.755535 containerd[1562]: time="2026-01-23T01:07:46.755491179Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\""
\"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\"" Jan 23 01:07:46.757332 containerd[1562]: time="2026-01-23T01:07:46.757259699Z" level=info msg="StartContainer for \"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\"" Jan 23 01:07:46.759428 containerd[1562]: time="2026-01-23T01:07:46.759390277Z" level=info msg="connecting to shim e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd" address="unix:///run/containerd/s/7b3dacf84138f69cbb93e270a4e0d2efae2f1da663f457f794a1b140e362113e" protocol=ttrpc version=3 Jan 23 01:07:46.765197 kubelet[2713]: E0123 01:07:46.765119 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.765197 kubelet[2713]: W0123 01:07:46.765137 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.765197 kubelet[2713]: E0123 01:07:46.765154 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.767404 kubelet[2713]: E0123 01:07:46.767034 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.767404 kubelet[2713]: W0123 01:07:46.767049 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.767404 kubelet[2713]: E0123 01:07:46.767060 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.767404 kubelet[2713]: E0123 01:07:46.767321 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.767404 kubelet[2713]: W0123 01:07:46.767350 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.767404 kubelet[2713]: E0123 01:07:46.767360 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.768040 kubelet[2713]: E0123 01:07:46.768006 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.768040 kubelet[2713]: W0123 01:07:46.768036 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.768124 kubelet[2713]: E0123 01:07:46.768060 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.768690 kubelet[2713]: E0123 01:07:46.768528 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.768690 kubelet[2713]: W0123 01:07:46.768542 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.768690 kubelet[2713]: E0123 01:07:46.768553 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.768790 kubelet[2713]: E0123 01:07:46.768752 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.768790 kubelet[2713]: W0123 01:07:46.768761 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.768790 kubelet[2713]: E0123 01:07:46.768769 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.768967 kubelet[2713]: E0123 01:07:46.768949 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.768967 kubelet[2713]: W0123 01:07:46.768964 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.768967 kubelet[2713]: E0123 01:07:46.768994 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.769261 kubelet[2713]: E0123 01:07:46.769175 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.769261 kubelet[2713]: W0123 01:07:46.769182 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.769261 kubelet[2713]: E0123 01:07:46.769190 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.769566 kubelet[2713]: E0123 01:07:46.769552 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.769566 kubelet[2713]: W0123 01:07:46.769564 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.769620 kubelet[2713]: E0123 01:07:46.769572 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.769779 kubelet[2713]: E0123 01:07:46.769757 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.769779 kubelet[2713]: W0123 01:07:46.769772 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.769779 kubelet[2713]: E0123 01:07:46.769780 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.770101 kubelet[2713]: E0123 01:07:46.769951 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.770101 kubelet[2713]: W0123 01:07:46.769959 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.770101 kubelet[2713]: E0123 01:07:46.769966 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.770364 kubelet[2713]: E0123 01:07:46.770348 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.770364 kubelet[2713]: W0123 01:07:46.770361 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.770442 kubelet[2713]: E0123 01:07:46.770370 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.771011 kubelet[2713]: E0123 01:07:46.770642 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.771011 kubelet[2713]: W0123 01:07:46.770654 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.771011 kubelet[2713]: E0123 01:07:46.770662 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.771011 kubelet[2713]: E0123 01:07:46.770936 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.771011 kubelet[2713]: W0123 01:07:46.770946 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.771011 kubelet[2713]: E0123 01:07:46.770954 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.771180 kubelet[2713]: E0123 01:07:46.771158 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.771180 kubelet[2713]: W0123 01:07:46.771173 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.771223 kubelet[2713]: E0123 01:07:46.771181 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.771487 kubelet[2713]: E0123 01:07:46.771466 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.771487 kubelet[2713]: W0123 01:07:46.771481 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.771487 kubelet[2713]: E0123 01:07:46.771489 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.771792 kubelet[2713]: E0123 01:07:46.771729 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.771792 kubelet[2713]: W0123 01:07:46.771739 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.771792 kubelet[2713]: E0123 01:07:46.771761 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.772131 kubelet[2713]: E0123 01:07:46.772112 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.772131 kubelet[2713]: W0123 01:07:46.772127 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.772289 kubelet[2713]: E0123 01:07:46.772148 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.772420 kubelet[2713]: E0123 01:07:46.772375 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.772420 kubelet[2713]: W0123 01:07:46.772389 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.772420 kubelet[2713]: E0123 01:07:46.772410 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.772652 kubelet[2713]: E0123 01:07:46.772620 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.772652 kubelet[2713]: W0123 01:07:46.772634 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.772711 kubelet[2713]: E0123 01:07:46.772655 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.772902 kubelet[2713]: E0123 01:07:46.772850 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.772902 kubelet[2713]: W0123 01:07:46.772860 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.772953 kubelet[2713]: E0123 01:07:46.772938 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.773165 kubelet[2713]: E0123 01:07:46.773098 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.773165 kubelet[2713]: W0123 01:07:46.773111 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.773226 kubelet[2713]: E0123 01:07:46.773191 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.773415 kubelet[2713]: E0123 01:07:46.773342 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.773415 kubelet[2713]: W0123 01:07:46.773353 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.773461 kubelet[2713]: E0123 01:07:46.773431 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.773625 kubelet[2713]: E0123 01:07:46.773605 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.773625 kubelet[2713]: W0123 01:07:46.773619 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.773800 kubelet[2713]: E0123 01:07:46.773630 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.774151 kubelet[2713]: E0123 01:07:46.774136 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.774151 kubelet[2713]: W0123 01:07:46.774149 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.774210 kubelet[2713]: E0123 01:07:46.774171 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.774408 kubelet[2713]: E0123 01:07:46.774373 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.774408 kubelet[2713]: W0123 01:07:46.774385 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.774408 kubelet[2713]: E0123 01:07:46.774406 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.774648 kubelet[2713]: E0123 01:07:46.774595 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.774648 kubelet[2713]: W0123 01:07:46.774606 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.774695 kubelet[2713]: E0123 01:07:46.774681 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.774951 kubelet[2713]: E0123 01:07:46.774822 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.774951 kubelet[2713]: W0123 01:07:46.774835 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.774951 kubelet[2713]: E0123 01:07:46.774855 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.775097 kubelet[2713]: E0123 01:07:46.775074 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.775097 kubelet[2713]: W0123 01:07:46.775090 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.775172 kubelet[2713]: E0123 01:07:46.775110 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:07:46.775341 kubelet[2713]: E0123 01:07:46.775319 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.775341 kubelet[2713]: W0123 01:07:46.775334 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.775341 kubelet[2713]: E0123 01:07:46.775354 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.775672 kubelet[2713]: E0123 01:07:46.775652 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.775672 kubelet[2713]: W0123 01:07:46.775667 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.775823 kubelet[2713]: E0123 01:07:46.775677 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.776082 kubelet[2713]: E0123 01:07:46.776062 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.776082 kubelet[2713]: W0123 01:07:46.776074 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.776133 kubelet[2713]: E0123 01:07:46.776096 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.776346 kubelet[2713]: E0123 01:07:46.776295 2713 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:07:46.776346 kubelet[2713]: W0123 01:07:46.776306 2713 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:07:46.776346 kubelet[2713]: E0123 01:07:46.776314 2713 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:07:46.790110 systemd[1]: Started cri-containerd-e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd.scope - libcontainer container e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd. Jan 23 01:07:46.877663 containerd[1562]: time="2026-01-23T01:07:46.876217629Z" level=info msg="StartContainer for \"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\" returns successfully" Jan 23 01:07:46.891824 systemd[1]: cri-containerd-e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd.scope: Deactivated successfully. 
Jan 23 01:07:46.896260 containerd[1562]: time="2026-01-23T01:07:46.896114444Z" level=info msg="received container exit event container_id:\"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\" id:\"e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd\" pid:3417 exited_at:{seconds:1769130466 nanos:895700068}"
Jan 23 01:07:46.924088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e68f43df6c03c53a2242485a1e9adea1215386c2974a59b1d9d50dcd7ca8e0fd-rootfs.mount: Deactivated successfully.
Jan 23 01:07:47.698588 kubelet[2713]: E0123 01:07:47.698557 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:47.699695 kubelet[2713]: I0123 01:07:47.699672 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 01:07:47.699913 kubelet[2713]: E0123 01:07:47.699896 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:47.703996 containerd[1562]: time="2026-01-23T01:07:47.702459099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 01:07:47.713240 kubelet[2713]: I0123 01:07:47.713193 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-764f6c46b7-r9lw6" podStartSLOduration=2.319155826 podStartE2EDuration="3.713147513s" podCreationTimestamp="2026-01-23 01:07:44 +0000 UTC" firstStartedPulling="2026-01-23 01:07:44.563745283 +0000 UTC m=+20.034630236" lastFinishedPulling="2026-01-23 01:07:45.95773696 +0000 UTC m=+21.428621923" observedRunningTime="2026-01-23 01:07:46.711195609 +0000 UTC m=+22.182080562" watchObservedRunningTime="2026-01-23 01:07:47.713147513 +0000 UTC m=+23.184032466"
Jan 23 01:07:48.623513 kubelet[2713]: E0123 01:07:48.622995 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:07:49.399027 containerd[1562]: time="2026-01-23T01:07:49.398964033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:49.399854 containerd[1562]: time="2026-01-23T01:07:49.399744786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 23 01:07:49.400325 containerd[1562]: time="2026-01-23T01:07:49.400296551Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:49.401831 containerd[1562]: time="2026-01-23T01:07:49.401803888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:07:49.402685 containerd[1562]: time="2026-01-23T01:07:49.402450372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.699961613s"
Jan 23 01:07:49.402685 containerd[1562]: time="2026-01-23T01:07:49.402472083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 23 01:07:49.405173 containerd[1562]: time="2026-01-23T01:07:49.405148090Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 01:07:49.413997 containerd[1562]: time="2026-01-23T01:07:49.410655402Z" level=info msg="Container 4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:07:49.426350 containerd[1562]: time="2026-01-23T01:07:49.426319137Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e\""
Jan 23 01:07:49.426677 containerd[1562]: time="2026-01-23T01:07:49.426657525Z" level=info msg="StartContainer for \"4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e\""
Jan 23 01:07:49.428156 containerd[1562]: time="2026-01-23T01:07:49.428130692Z" level=info msg="connecting to shim 4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e" address="unix:///run/containerd/s/7b3dacf84138f69cbb93e270a4e0d2efae2f1da663f457f794a1b140e362113e" protocol=ttrpc version=3
Jan 23 01:07:49.461114 systemd[1]: Started cri-containerd-4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e.scope - libcontainer container 4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e.
Jan 23 01:07:49.540473 containerd[1562]: time="2026-01-23T01:07:49.540444507Z" level=info msg="StartContainer for \"4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e\" returns successfully"
Jan 23 01:07:49.706503 kubelet[2713]: E0123 01:07:49.706126 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:07:50.010924 containerd[1562]: time="2026-01-23T01:07:50.010887315Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 01:07:50.015030 systemd[1]: cri-containerd-4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e.scope: Deactivated successfully.
Jan 23 01:07:50.016311 systemd[1]: cri-containerd-4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e.scope: Consumed 504ms CPU time, 194.6M memory peak, 171.3M written to disk.
Jan 23 01:07:50.017606 containerd[1562]: time="2026-01-23T01:07:50.017563852Z" level=info msg="received container exit event container_id:\"4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e\" id:\"4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e\" pid:3474 exited_at:{seconds:1769130470 nanos:17258125}"
Jan 23 01:07:50.023211 kubelet[2713]: I0123 01:07:50.023190 2713 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 01:07:50.055618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c9587e16fde4072e80725c097119f902a8aa0d0a37c1e728d4106916cb28a5e-rootfs.mount: Deactivated successfully.
Jan 23 01:07:50.074923 kubelet[2713]: W0123 01:07:50.074803 2713 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-239-48-230" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-239-48-230' and this object
Jan 23 01:07:50.074923 kubelet[2713]: E0123 01:07:50.074839 2713 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-239-48-230\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-239-48-230' and this object" logger="UnhandledError"
Jan 23 01:07:50.084612 systemd[1]: Created slice kubepods-besteffort-pod4157aec9_2f10_4912_b876_2bb1a760ce39.slice - libcontainer container kubepods-besteffort-pod4157aec9_2f10_4912_b876_2bb1a760ce39.slice.
Jan 23 01:07:50.098723 systemd[1]: Created slice kubepods-burstable-pod43ad019e_3338_41a8_9c34_e47d883b0a20.slice - libcontainer container kubepods-burstable-pod43ad019e_3338_41a8_9c34_e47d883b0a20.slice.
Jan 23 01:07:50.121292 systemd[1]: Created slice kubepods-besteffort-pod7b4c53ca_4a96_4bc4_b8fd_bec645f7d9e1.slice - libcontainer container kubepods-besteffort-pod7b4c53ca_4a96_4bc4_b8fd_bec645f7d9e1.slice.
Jan 23 01:07:50.133612 systemd[1]: Created slice kubepods-burstable-pod960a44a5_6b2b_446e_b511_1fea653e9a6f.slice - libcontainer container kubepods-burstable-pod960a44a5_6b2b_446e_b511_1fea653e9a6f.slice.
Jan 23 01:07:50.146821 systemd[1]: Created slice kubepods-besteffort-pod33350001_d074_4db1_9299_b1861aa3ad0b.slice - libcontainer container kubepods-besteffort-pod33350001_d074_4db1_9299_b1861aa3ad0b.slice.
Jan 23 01:07:50.157013 systemd[1]: Created slice kubepods-besteffort-podab709807_4327_49bc_a89a_808c81e848bf.slice - libcontainer container kubepods-besteffort-podab709807_4327_49bc_a89a_808c81e848bf.slice.
Jan 23 01:07:50.163726 systemd[1]: Created slice kubepods-besteffort-pod1f20c944_f2fa_454c_8f1a_5b6a04bf7592.slice - libcontainer container kubepods-besteffort-pod1f20c944_f2fa_454c_8f1a_5b6a04bf7592.slice.
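The slice names above follow the kubelet's systemd cgroup driver convention: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID mapped to underscores because systemd reserves "-" to encode slice hierarchy. A small Go sketch reproducing the names seen in the log (the helper is ours for illustration, not a kubelet API):

package main

import (
    "fmt"
    "strings"
)

// podSliceName derives a systemd slice name of the form seen in the log.
func podSliceName(qosClass, podUID string) string {
    // systemd uses "-" to nest slices, so the UID's dashes become underscores.
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
    fmt.Println(podSliceName("besteffort", "4157aec9-2f10-4912-b876-2bb1a760ce39"))
    // Output: kubepods-besteffort-pod4157aec9_2f10_4912_b876_2bb1a760ce39.slice
}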
Jan 23 01:07:50.197259 kubelet[2713]: I0123 01:07:50.197182 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2sx\" (UniqueName: \"kubernetes.io/projected/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-kube-api-access-mp2sx\") pod \"whisker-7c8f4767d4-wwcg5\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " pod="calico-system/whisker-7c8f4767d4-wwcg5"
Jan 23 01:07:50.197259 kubelet[2713]: I0123 01:07:50.197219 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ds7\" (UniqueName: \"kubernetes.io/projected/960a44a5-6b2b-446e-b511-1fea653e9a6f-kube-api-access-t7ds7\") pod \"coredns-668d6bf9bc-q2nnc\" (UID: \"960a44a5-6b2b-446e-b511-1fea653e9a6f\") " pod="kube-system/coredns-668d6bf9bc-q2nnc"
Jan 23 01:07:50.197598 kubelet[2713]: I0123 01:07:50.197239 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab709807-4327-49bc-a89a-808c81e848bf-goldmane-ca-bundle\") pod \"goldmane-666569f655-wlzdw\" (UID: \"ab709807-4327-49bc-a89a-808c81e848bf\") " pod="calico-system/goldmane-666569f655-wlzdw"
Jan 23 01:07:50.197598 kubelet[2713]: I0123 01:07:50.197466 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab709807-4327-49bc-a89a-808c81e848bf-config\") pod \"goldmane-666569f655-wlzdw\" (UID: \"ab709807-4327-49bc-a89a-808c81e848bf\") " pod="calico-system/goldmane-666569f655-wlzdw"
Jan 23 01:07:50.197598 kubelet[2713]: I0123 01:07:50.197486 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfpr\" (UniqueName: \"kubernetes.io/projected/ab709807-4327-49bc-a89a-808c81e848bf-kube-api-access-ntfpr\") pod \"goldmane-666569f655-wlzdw\" (UID: \"ab709807-4327-49bc-a89a-808c81e848bf\") " pod="calico-system/goldmane-666569f655-wlzdw"
Jan 23 01:07:50.197598 kubelet[2713]: I0123 01:07:50.197523 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnbj\" (UniqueName: \"kubernetes.io/projected/43ad019e-3338-41a8-9c34-e47d883b0a20-kube-api-access-zxnbj\") pod \"coredns-668d6bf9bc-4vxkj\" (UID: \"43ad019e-3338-41a8-9c34-e47d883b0a20\") " pod="kube-system/coredns-668d6bf9bc-4vxkj"
Jan 23 01:07:50.197598 kubelet[2713]: I0123 01:07:50.197542 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmkqp\" (UniqueName: \"kubernetes.io/projected/1f20c944-f2fa-454c-8f1a-5b6a04bf7592-kube-api-access-lmkqp\") pod \"calico-kube-controllers-79578dbdbf-d2s9w\" (UID: \"1f20c944-f2fa-454c-8f1a-5b6a04bf7592\") " pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w"
Jan 23 01:07:50.198208 kubelet[2713]: I0123 01:07:50.197561 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-ca-bundle\") pod \"whisker-7c8f4767d4-wwcg5\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " pod="calico-system/whisker-7c8f4767d4-wwcg5"
Jan 23 01:07:50.198208 kubelet[2713]: I0123 01:07:50.197577 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab709807-4327-49bc-a89a-808c81e848bf-goldmane-key-pair\") pod \"goldmane-666569f655-wlzdw\" (UID: \"ab709807-4327-49bc-a89a-808c81e848bf\") " pod="calico-system/goldmane-666569f655-wlzdw"
Jan 23 01:07:50.198208 kubelet[2713]: I0123 01:07:50.197906 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4157aec9-2f10-4912-b876-2bb1a760ce39-calico-apiserver-certs\") pod \"calico-apiserver-75bfc7c68c-9b4d8\" (UID: \"4157aec9-2f10-4912-b876-2bb1a760ce39\") " pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8"
Jan 23 01:07:50.198208 kubelet[2713]: I0123 01:07:50.197929 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43ad019e-3338-41a8-9c34-e47d883b0a20-config-volume\") pod \"coredns-668d6bf9bc-4vxkj\" (UID: \"43ad019e-3338-41a8-9c34-e47d883b0a20\") " pod="kube-system/coredns-668d6bf9bc-4vxkj"
Jan 23 01:07:50.198208 kubelet[2713]: I0123 01:07:50.197962 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/960a44a5-6b2b-446e-b511-1fea653e9a6f-config-volume\") pod \"coredns-668d6bf9bc-q2nnc\" (UID: \"960a44a5-6b2b-446e-b511-1fea653e9a6f\") " pod="kube-system/coredns-668d6bf9bc-q2nnc"
Jan 23 01:07:50.198332 kubelet[2713]: I0123 01:07:50.198008 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxtc\" (UniqueName: \"kubernetes.io/projected/4157aec9-2f10-4912-b876-2bb1a760ce39-kube-api-access-ktxtc\") pod \"calico-apiserver-75bfc7c68c-9b4d8\" (UID: \"4157aec9-2f10-4912-b876-2bb1a760ce39\") " pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8"
Jan 23 01:07:50.198332 kubelet[2713]: I0123 01:07:50.198054 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f20c944-f2fa-454c-8f1a-5b6a04bf7592-tigera-ca-bundle\") pod \"calico-kube-controllers-79578dbdbf-d2s9w\" (UID: \"1f20c944-f2fa-454c-8f1a-5b6a04bf7592\") " pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w"
Jan 23 01:07:50.198332 kubelet[2713]: I0123 01:07:50.198077 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33350001-d074-4db1-9299-b1861aa3ad0b-calico-apiserver-certs\") pod \"calico-apiserver-75bfc7c68c-flx8n\" (UID: \"33350001-d074-4db1-9299-b1861aa3ad0b\") " pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n"
Jan 23 01:07:50.198332 kubelet[2713]: I0123 01:07:50.198094 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98pqd\" (UniqueName: \"kubernetes.io/projected/33350001-d074-4db1-9299-b1861aa3ad0b-kube-api-access-98pqd\") pod \"calico-apiserver-75bfc7c68c-flx8n\" (UID: \"33350001-d074-4db1-9299-b1861aa3ad0b\") " pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n"
Jan 23 01:07:50.198494 kubelet[2713]: I0123 01:07:50.198460 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-backend-key-pair\") pod \"whisker-7c8f4767d4-wwcg5\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " pod="calico-system/whisker-7c8f4767d4-wwcg5"
Jan 23 01:07:50.394302 containerd[1562]: time="2026-01-23T01:07:50.393315985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-9b4d8,Uid:4157aec9-2f10-4912-b876-2bb1a760ce39,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 01:07:50.430042 containerd[1562]: time="2026-01-23T01:07:50.429985094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c8f4767d4-wwcg5,Uid:7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1,Namespace:calico-system,Attempt:0,}"
Jan 23 01:07:50.457078 containerd[1562]: time="2026-01-23T01:07:50.456965560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-flx8n,Uid:33350001-d074-4db1-9299-b1861aa3ad0b,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 01:07:50.464643 containerd[1562]: time="2026-01-23T01:07:50.464486051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wlzdw,Uid:ab709807-4327-49bc-a89a-808c81e848bf,Namespace:calico-system,Attempt:0,}"
Jan 23 01:07:50.469227 containerd[1562]: time="2026-01-23T01:07:50.469168633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79578dbdbf-d2s9w,Uid:1f20c944-f2fa-454c-8f1a-5b6a04bf7592,Namespace:calico-system,Attempt:0,}"
Jan 23 01:07:50.529739 containerd[1562]: time="2026-01-23T01:07:50.529669414Z" level=error msg="Failed to destroy network for sandbox \"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.531223 containerd[1562]: time="2026-01-23T01:07:50.531147732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-9b4d8,Uid:4157aec9-2f10-4912-b876-2bb1a760ce39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.531479 kubelet[2713]: E0123 01:07:50.531365 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.531479 kubelet[2713]: E0123 01:07:50.531439 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8"
Jan 23 01:07:50.531479 kubelet[2713]: E0123 01:07:50.531459 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8"
Jan 23 01:07:50.531933 kubelet[2713]: E0123 01:07:50.531535 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0e76a4f1f00bc3827b24737fbcb0db64aca2603b8b4b9092905f9266542b248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39"
Jan 23 01:07:50.535592 containerd[1562]: time="2026-01-23T01:07:50.535538028Z" level=error msg="Failed to destroy network for sandbox \"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.537179 containerd[1562]: time="2026-01-23T01:07:50.537106745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c8f4767d4-wwcg5,Uid:7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.538033 kubelet[2713]: E0123 01:07:50.537493 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 01:07:50.538092 kubelet[2713]: E0123 01:07:50.538035 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c8f4767d4-wwcg5"
Jan 23 01:07:50.538092 kubelet[2713]: E0123 01:07:50.538054 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c8f4767d4-wwcg5"
Jan 23 01:07:50.538148 kubelet[2713]: E0123 01:07:50.538111 2713 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"CreatePodSandbox\" for \"whisker-7c8f4767d4-wwcg5_calico-system(7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c8f4767d4-wwcg5_calico-system(7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb4d0b76df512c2522e488c198842c50b7e366106a29b4d63c0b5bff2cb8497c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c8f4767d4-wwcg5" podUID="7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1" Jan 23 01:07:50.587366 containerd[1562]: time="2026-01-23T01:07:50.587219069Z" level=error msg="Failed to destroy network for sandbox \"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.588667 containerd[1562]: time="2026-01-23T01:07:50.588625127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wlzdw,Uid:ab709807-4327-49bc-a89a-808c81e848bf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.589232 kubelet[2713]: E0123 01:07:50.589178 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.589461 kubelet[2713]: E0123 01:07:50.589358 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wlzdw" Jan 23 01:07:50.589620 kubelet[2713]: E0123 01:07:50.589503 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wlzdw" Jan 23 01:07:50.589857 kubelet[2713]: E0123 01:07:50.589795 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9468991c47024de0ccb4fa8176e19814adabeda2e5773031334d99261c9b3f66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:07:50.593202 containerd[1562]: time="2026-01-23T01:07:50.593079262Z" level=error msg="Failed to destroy network for sandbox \"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.594096 containerd[1562]: time="2026-01-23T01:07:50.594067854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79578dbdbf-d2s9w,Uid:1f20c944-f2fa-454c-8f1a-5b6a04bf7592,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.594712 containerd[1562]: time="2026-01-23T01:07:50.594436981Z" level=error msg="Failed to destroy network for sandbox \"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.594753 kubelet[2713]: E0123 01:07:50.594506 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.594753 kubelet[2713]: E0123 01:07:50.594536 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" Jan 23 01:07:50.594753 kubelet[2713]: E0123 01:07:50.594551 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" Jan 23 01:07:50.594836 kubelet[2713]: E0123 01:07:50.594576 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b77fdfed525af68a354a8cc1c6f162fcec34cd3f976e8296500369851998923\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:07:50.595769 containerd[1562]: time="2026-01-23T01:07:50.595694901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-flx8n,Uid:33350001-d074-4db1-9299-b1861aa3ad0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.596256 kubelet[2713]: E0123 01:07:50.596116 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.596256 kubelet[2713]: E0123 01:07:50.596187 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" Jan 23 01:07:50.596256 kubelet[2713]: E0123 01:07:50.596214 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" Jan 23 01:07:50.596404 kubelet[2713]: E0123 01:07:50.596252 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e141d46ab103122033474a2813ac2369e34951633ea8a3d33e019b1f594da93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:07:50.627999 systemd[1]: Created slice kubepods-besteffort-podfb0b4136_7548_44aa_9706_52799d45da0f.slice - libcontainer 
container kubepods-besteffort-podfb0b4136_7548_44aa_9706_52799d45da0f.slice. Jan 23 01:07:50.630777 containerd[1562]: time="2026-01-23T01:07:50.630730484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znvnr,Uid:fb0b4136-7548-44aa-9706-52799d45da0f,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:50.676252 containerd[1562]: time="2026-01-23T01:07:50.676140793Z" level=error msg="Failed to destroy network for sandbox \"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.677524 containerd[1562]: time="2026-01-23T01:07:50.677408824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znvnr,Uid:fb0b4136-7548-44aa-9706-52799d45da0f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.678182 kubelet[2713]: E0123 01:07:50.677894 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:50.678182 kubelet[2713]: E0123 01:07:50.678058 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:50.678787 kubelet[2713]: E0123 01:07:50.678740 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-znvnr" Jan 23 01:07:50.678902 kubelet[2713]: E0123 01:07:50.678831 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cdaababf87fe07cf9c176d00f25fa21a596e7a85bf88c9a5ef59d5997ae00f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:07:50.710387 kubelet[2713]: E0123 
01:07:50.710361 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:50.711682 containerd[1562]: time="2026-01-23T01:07:50.711533483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:07:51.015880 kubelet[2713]: E0123 01:07:51.015605 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:51.016253 containerd[1562]: time="2026-01-23T01:07:51.016223188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vxkj,Uid:43ad019e-3338-41a8-9c34-e47d883b0a20,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:51.040265 kubelet[2713]: E0123 01:07:51.040233 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:51.042050 containerd[1562]: time="2026-01-23T01:07:51.041994660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2nnc,Uid:960a44a5-6b2b-446e-b511-1fea653e9a6f,Namespace:kube-system,Attempt:0,}" Jan 23 01:07:51.083670 containerd[1562]: time="2026-01-23T01:07:51.083538987Z" level=error msg="Failed to destroy network for sandbox \"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.084961 containerd[1562]: time="2026-01-23T01:07:51.084898997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vxkj,Uid:43ad019e-3338-41a8-9c34-e47d883b0a20,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.085487 kubelet[2713]: E0123 01:07:51.085456 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.085675 kubelet[2713]: E0123 01:07:51.085638 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4vxkj" Jan 23 01:07:51.086105 kubelet[2713]: E0123 01:07:51.085732 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4vxkj" Jan 23 01:07:51.086198 kubelet[2713]: E0123 01:07:51.086057 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4vxkj_kube-system(43ad019e-3338-41a8-9c34-e47d883b0a20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4vxkj_kube-system(43ad019e-3338-41a8-9c34-e47d883b0a20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94c14fe243f4c22f4da58b6636b823f1d2dcc6aaeb089c1b40ab4153e45dc89b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4vxkj" podUID="43ad019e-3338-41a8-9c34-e47d883b0a20" Jan 23 01:07:51.111291 containerd[1562]: time="2026-01-23T01:07:51.111255204Z" level=error msg="Failed to destroy network for sandbox \"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.112223 containerd[1562]: time="2026-01-23T01:07:51.112137938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2nnc,Uid:960a44a5-6b2b-446e-b511-1fea653e9a6f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.112380 kubelet[2713]: E0123 01:07:51.112346 2713 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:07:51.112462 kubelet[2713]: E0123 01:07:51.112394 2713 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q2nnc" Jan 23 01:07:51.112462 kubelet[2713]: E0123 01:07:51.112414 2713 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q2nnc" Jan 23 01:07:51.112462 kubelet[2713]: E0123 01:07:51.112449 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-q2nnc_kube-system(960a44a5-6b2b-446e-b511-1fea653e9a6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q2nnc_kube-system(960a44a5-6b2b-446e-b511-1fea653e9a6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e5d8b4379507c4de134fbcfc7b6f7b5c0bd06619d1a0a099bc5be8290e7ff5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q2nnc" podUID="960a44a5-6b2b-446e-b511-1fea653e9a6f" Jan 23 01:07:51.413278 systemd[1]: run-netns-cni\x2d4d6607bf\x2de6f0\x2d0a7e\x2dea3d\x2d176a00ecaae5.mount: Deactivated successfully. Jan 23 01:07:51.413663 systemd[1]: run-netns-cni\x2dfbbbf509\x2d1691\x2d13a1\x2dc953\x2dad04dbaab5ab.mount: Deactivated successfully. Jan 23 01:07:51.413866 systemd[1]: run-netns-cni\x2dfc18bcf1\x2d6b86\x2d8437\x2dc4c7\x2d3fd44d364beb.mount: Deactivated successfully. Jan 23 01:07:51.414085 systemd[1]: run-netns-cni\x2df1ff9077\x2d7342\x2d5381\x2d2c63\x2de0987f3921e4.mount: Deactivated successfully. Jan 23 01:07:51.414295 systemd[1]: run-netns-cni\x2ddbf7237c\x2d9825\x2dfe0a\x2d68a9\x2dad31e9b390b7.mount: Deactivated successfully. Jan 23 01:07:54.310632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374889582.mount: Deactivated successfully. Jan 23 01:07:54.341538 containerd[1562]: time="2026-01-23T01:07:54.341490786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:54.342438 containerd[1562]: time="2026-01-23T01:07:54.342179923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:07:54.342935 containerd[1562]: time="2026-01-23T01:07:54.342911598Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:54.344381 containerd[1562]: time="2026-01-23T01:07:54.344353900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:07:54.344813 containerd[1562]: time="2026-01-23T01:07:54.344792948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.633235785s" Jan 23 01:07:54.344886 containerd[1562]: time="2026-01-23T01:07:54.344871967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:07:54.362767 containerd[1562]: time="2026-01-23T01:07:54.362732086Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:07:54.369691 containerd[1562]: time="2026-01-23T01:07:54.369666897Z" level=info msg="Container 9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118: CDI devices from CRI Config.CDIDevices: []" Jan 23 
01:07:54.380955 containerd[1562]: time="2026-01-23T01:07:54.380925003Z" level=info msg="CreateContainer within sandbox \"ce7f8af87a0d83ecdea60c4d43fecbb8bfce003421da2ce413e61ee5aeda9b9c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118\"" Jan 23 01:07:54.382091 containerd[1562]: time="2026-01-23T01:07:54.381337881Z" level=info msg="StartContainer for \"9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118\"" Jan 23 01:07:54.384075 containerd[1562]: time="2026-01-23T01:07:54.384041426Z" level=info msg="connecting to shim 9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118" address="unix:///run/containerd/s/7b3dacf84138f69cbb93e270a4e0d2efae2f1da663f457f794a1b140e362113e" protocol=ttrpc version=3 Jan 23 01:07:54.438107 systemd[1]: Started cri-containerd-9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118.scope - libcontainer container 9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118. Jan 23 01:07:54.528878 containerd[1562]: time="2026-01-23T01:07:54.528827745Z" level=info msg="StartContainer for \"9865bae6a3e118c5be5fb8c03cf7e7b2a5f2e8d635f63cbb3a63c0b34f375118\" returns successfully" Jan 23 01:07:54.628673 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:07:54.628792 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 01:07:54.743400 kubelet[2713]: E0123 01:07:54.741433 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:54.791148 kubelet[2713]: I0123 01:07:54.789596 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s7ksf" podStartSLOduration=2.071013213 podStartE2EDuration="10.789581267s" podCreationTimestamp="2026-01-23 01:07:44 +0000 UTC" firstStartedPulling="2026-01-23 01:07:45.62702897 +0000 UTC m=+21.097913933" lastFinishedPulling="2026-01-23 01:07:54.345597034 +0000 UTC m=+29.816481987" observedRunningTime="2026-01-23 01:07:54.785852539 +0000 UTC m=+30.256737502" watchObservedRunningTime="2026-01-23 01:07:54.789581267 +0000 UTC m=+30.260466220" Jan 23 01:07:54.934899 kubelet[2713]: I0123 01:07:54.934849 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp2sx\" (UniqueName: \"kubernetes.io/projected/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-kube-api-access-mp2sx\") pod \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " Jan 23 01:07:54.935940 kubelet[2713]: I0123 01:07:54.935905 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-backend-key-pair\") pod \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " Jan 23 01:07:54.936183 kubelet[2713]: I0123 01:07:54.936076 2713 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-ca-bundle\") pod \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\" (UID: \"7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1\") " Jan 23 01:07:54.937724 kubelet[2713]: I0123 01:07:54.937618 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1" (UID: "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:07:54.950590 kubelet[2713]: I0123 01:07:54.950566 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-kube-api-access-mp2sx" (OuterVolumeSpecName: "kube-api-access-mp2sx") pod "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1" (UID: "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1"). InnerVolumeSpecName "kube-api-access-mp2sx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:07:54.950997 kubelet[2713]: I0123 01:07:54.950939 2713 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1" (UID: "7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:07:55.037326 kubelet[2713]: I0123 01:07:55.037274 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-ca-bundle\") on node \"172-239-48-230\" DevicePath \"\"" Jan 23 01:07:55.037326 kubelet[2713]: I0123 01:07:55.037298 2713 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-whisker-backend-key-pair\") on node \"172-239-48-230\" DevicePath \"\"" Jan 23 01:07:55.037326 kubelet[2713]: I0123 01:07:55.037309 2713 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mp2sx\" (UniqueName: \"kubernetes.io/projected/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1-kube-api-access-mp2sx\") on node \"172-239-48-230\" DevicePath \"\"" Jan 23 01:07:55.311028 systemd[1]: var-lib-kubelet-pods-7b4c53ca\x2d4a96\x2d4bc4\x2db8fd\x2dbec645f7d9e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmp2sx.mount: Deactivated successfully. Jan 23 01:07:55.311246 systemd[1]: var-lib-kubelet-pods-7b4c53ca\x2d4a96\x2d4bc4\x2db8fd\x2dbec645f7d9e1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:07:55.745668 kubelet[2713]: E0123 01:07:55.744762 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:07:55.756025 systemd[1]: Removed slice kubepods-besteffort-pod7b4c53ca_4a96_4bc4_b8fd_bec645f7d9e1.slice - libcontainer container kubepods-besteffort-pod7b4c53ca_4a96_4bc4_b8fd_bec645f7d9e1.slice. Jan 23 01:07:55.823161 systemd[1]: Created slice kubepods-besteffort-pod002ca359_8c71_4c3b_83f5_6e16f458e48e.slice - libcontainer container kubepods-besteffort-pod002ca359_8c71_4c3b_83f5_6e16f458e48e.slice. 
Jan 23 01:07:55.942801 kubelet[2713]: I0123 01:07:55.942772 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/002ca359-8c71-4c3b-83f5-6e16f458e48e-whisker-backend-key-pair\") pod \"whisker-5df4546f58-gg8t9\" (UID: \"002ca359-8c71-4c3b-83f5-6e16f458e48e\") " pod="calico-system/whisker-5df4546f58-gg8t9" Jan 23 01:07:55.943064 kubelet[2713]: I0123 01:07:55.943003 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cll4\" (UniqueName: \"kubernetes.io/projected/002ca359-8c71-4c3b-83f5-6e16f458e48e-kube-api-access-6cll4\") pod \"whisker-5df4546f58-gg8t9\" (UID: \"002ca359-8c71-4c3b-83f5-6e16f458e48e\") " pod="calico-system/whisker-5df4546f58-gg8t9" Jan 23 01:07:55.943064 kubelet[2713]: I0123 01:07:55.943054 2713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/002ca359-8c71-4c3b-83f5-6e16f458e48e-whisker-ca-bundle\") pod \"whisker-5df4546f58-gg8t9\" (UID: \"002ca359-8c71-4c3b-83f5-6e16f458e48e\") " pod="calico-system/whisker-5df4546f58-gg8t9" Jan 23 01:07:56.131477 containerd[1562]: time="2026-01-23T01:07:56.131338555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df4546f58-gg8t9,Uid:002ca359-8c71-4c3b-83f5-6e16f458e48e,Namespace:calico-system,Attempt:0,}" Jan 23 01:07:56.337284 systemd-networkd[1431]: cali97b0db80ee2: Link UP Jan 23 01:07:56.338785 systemd-networkd[1431]: cali97b0db80ee2: Gained carrier Jan 23 01:07:56.354902 containerd[1562]: 2026-01-23 01:07:56.180 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:07:56.354902 containerd[1562]: 2026-01-23 01:07:56.231 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0 whisker-5df4546f58- calico-system 002ca359-8c71-4c3b-83f5-6e16f458e48e 873 0 2026-01-23 01:07:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5df4546f58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-48-230 whisker-5df4546f58-gg8t9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali97b0db80ee2 [] [] }} ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-" Jan 23 01:07:56.354902 containerd[1562]: 2026-01-23 01:07:56.232 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.354902 containerd[1562]: 2026-01-23 01:07:56.279 [INFO][3947] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" HandleID="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Workload="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.280 [INFO][3947] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" HandleID="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Workload="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d950), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-48-230", "pod":"whisker-5df4546f58-gg8t9", "timestamp":"2026-01-23 01:07:56.279688283 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.280 [INFO][3947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.280 [INFO][3947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.280 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.287 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" host="172-239-48-230" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.293 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.299 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.301 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.303 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:07:56.355083 containerd[1562]: 2026-01-23 01:07:56.304 [INFO][3947] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" host="172-239-48-230" Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.305 [INFO][3947] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588 Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.313 [INFO][3947] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" host="172-239-48-230" Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.317 [INFO][3947] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.129/26] block=192.168.103.128/26 handle="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" host="172-239-48-230" Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.318 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.129/26] handle="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" host="172-239-48-230" Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.318 [INFO][3947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:07:56.355298 containerd[1562]: 2026-01-23 01:07:56.318 [INFO][3947] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.129/26] IPv6=[] ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" HandleID="k8s-pod-network.3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Workload="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.355416 containerd[1562]: 2026-01-23 01:07:56.325 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0", GenerateName:"whisker-5df4546f58-", Namespace:"calico-system", SelfLink:"", UID:"002ca359-8c71-4c3b-83f5-6e16f458e48e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df4546f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"whisker-5df4546f58-gg8t9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali97b0db80ee2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:56.355416 containerd[1562]: 2026-01-23 01:07:56.325 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.129/32] ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.355485 containerd[1562]: 2026-01-23 01:07:56.325 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97b0db80ee2 ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.355485 containerd[1562]: 2026-01-23 01:07:56.335 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.355524 containerd[1562]: 2026-01-23 01:07:56.336 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" 
WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0", GenerateName:"whisker-5df4546f58-", Namespace:"calico-system", SelfLink:"", UID:"002ca359-8c71-4c3b-83f5-6e16f458e48e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5df4546f58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588", Pod:"whisker-5df4546f58-gg8t9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali97b0db80ee2", MAC:"e6:37:ec:35:45:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:07:56.355571 containerd[1562]: 2026-01-23 01:07:56.347 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" Namespace="calico-system" Pod="whisker-5df4546f58-gg8t9" WorkloadEndpoint="172--239--48--230-k8s-whisker--5df4546f58--gg8t9-eth0" Jan 23 01:07:56.391062 containerd[1562]: time="2026-01-23T01:07:56.390894665Z" level=info msg="connecting to shim 3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588" address="unix:///run/containerd/s/888c5a3331f4df29e4efd58a92f3d9cd1768e6c0ac687c99dea8b0f6e6310b9a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:07:56.423097 systemd[1]: Started cri-containerd-3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588.scope - libcontainer container 3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588. 
Jan 23 01:07:56.468704 containerd[1562]: time="2026-01-23T01:07:56.468672517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5df4546f58-gg8t9,Uid:002ca359-8c71-4c3b-83f5-6e16f458e48e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3085f8a7b59e6765a446a7af8402152c98a447ddc48c1e40e5e1f458a2a1d588\"" Jan 23 01:07:56.470714 containerd[1562]: time="2026-01-23T01:07:56.470551039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:07:56.600761 containerd[1562]: time="2026-01-23T01:07:56.600699952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:56.601560 containerd[1562]: time="2026-01-23T01:07:56.601530249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:07:56.601644 containerd[1562]: time="2026-01-23T01:07:56.601596498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:07:56.601790 kubelet[2713]: E0123 01:07:56.601750 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:56.601841 kubelet[2713]: E0123 01:07:56.601799 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:07:56.606453 kubelet[2713]: E0123 01:07:56.606374 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7c902b39aa0b49bfb339d3ac49963bbf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:56.609564 containerd[1562]: time="2026-01-23T01:07:56.609328821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:07:56.624025 kubelet[2713]: I0123 01:07:56.623945 2713 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1" path="/var/lib/kubelet/pods/7b4c53ca-4a96-4bc4-b8fd-bec645f7d9e1/volumes" Jan 23 01:07:56.764914 containerd[1562]: time="2026-01-23T01:07:56.764875484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:07:56.766126 containerd[1562]: time="2026-01-23T01:07:56.766037218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:07:56.766126 containerd[1562]: time="2026-01-23T01:07:56.766083259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:07:56.766265 kubelet[2713]: E0123 01:07:56.766219 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:56.766900 kubelet[2713]: E0123 01:07:56.766269 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:07:56.766940 kubelet[2713]: E0123 01:07:56.766391 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:07:56.768074 kubelet[2713]: E0123 01:07:56.767854 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:07:57.749080 kubelet[2713]: E0123 01:07:57.748944 
2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:07:57.907129 systemd-networkd[1431]: cali97b0db80ee2: Gained IPv6LL Jan 23 01:08:01.623116 containerd[1562]: time="2026-01-23T01:08:01.623033563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wlzdw,Uid:ab709807-4327-49bc-a89a-808c81e848bf,Namespace:calico-system,Attempt:0,}" Jan 23 01:08:01.733267 systemd-networkd[1431]: cali9555eeaf9ad: Link UP Jan 23 01:08:01.734717 systemd-networkd[1431]: cali9555eeaf9ad: Gained carrier Jan 23 01:08:01.751315 containerd[1562]: 2026-01-23 01:08:01.662 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:08:01.751315 containerd[1562]: 2026-01-23 01:08:01.676 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0 goldmane-666569f655- calico-system ab709807-4327-49bc-a89a-808c81e848bf 805 0 2026-01-23 01:07:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-48-230 goldmane-666569f655-wlzdw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9555eeaf9ad [] [] }} ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-" Jan 23 01:08:01.751315 containerd[1562]: 2026-01-23 01:08:01.676 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.751315 containerd[1562]: 2026-01-23 01:08:01.698 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" HandleID="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Workload="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.698 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" HandleID="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" 
Workload="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd030), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-48-230", "pod":"goldmane-666569f655-wlzdw", "timestamp":"2026-01-23 01:08:01.698813145 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.699 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.699 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.699 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.706 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" host="172-239-48-230" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.710 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.713 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.715 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.717 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:01.751497 containerd[1562]: 2026-01-23 01:08:01.717 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" host="172-239-48-230" Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.719 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162 Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.722 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" host="172-239-48-230" Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.727 [INFO][4137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.130/26] block=192.168.103.128/26 handle="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" host="172-239-48-230" Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.727 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.130/26] handle="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" host="172-239-48-230" Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.727 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:08:01.751745 containerd[1562]: 2026-01-23 01:08:01.727 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.130/26] IPv6=[] ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" HandleID="k8s-pod-network.a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Workload="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.751965 containerd[1562]: 2026-01-23 01:08:01.730 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab709807-4327-49bc-a89a-808c81e848bf", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"goldmane-666569f655-wlzdw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9555eeaf9ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:01.751965 containerd[1562]: 2026-01-23 01:08:01.730 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.130/32] ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.752149 containerd[1562]: 2026-01-23 01:08:01.730 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9555eeaf9ad ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.752149 containerd[1562]: 2026-01-23 01:08:01.735 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.752215 containerd[1562]: 2026-01-23 01:08:01.735 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" 
WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab709807-4327-49bc-a89a-808c81e848bf", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162", Pod:"goldmane-666569f655-wlzdw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9555eeaf9ad", MAC:"3e:94:3d:1a:61:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:01.752288 containerd[1562]: 2026-01-23 01:08:01.745 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" Namespace="calico-system" Pod="goldmane-666569f655-wlzdw" WorkloadEndpoint="172--239--48--230-k8s-goldmane--666569f655--wlzdw-eth0" Jan 23 01:08:01.773654 containerd[1562]: time="2026-01-23T01:08:01.773593119Z" level=info msg="connecting to shim a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162" address="unix:///run/containerd/s/964907dd4da2195092f0c46032bb66e245e6b74c1968cfbc1413f665eb387caa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:01.809174 systemd[1]: Started cri-containerd-a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162.scope - libcontainer container a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162. 
Jan 23 01:08:01.855520 containerd[1562]: time="2026-01-23T01:08:01.855206255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wlzdw,Uid:ab709807-4327-49bc-a89a-808c81e848bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3a3755276d110fcd8a55c03a4834fec5a15b436afb98e2c6bfbc208004c2162\"" Jan 23 01:08:01.861033 containerd[1562]: time="2026-01-23T01:08:01.860361800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:08:01.997692 containerd[1562]: time="2026-01-23T01:08:01.997532874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:01.998678 containerd[1562]: time="2026-01-23T01:08:01.998606521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:08:01.998678 containerd[1562]: time="2026-01-23T01:08:01.998648881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:01.998923 kubelet[2713]: E0123 01:08:01.998876 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:01.998923 kubelet[2713]: E0123 01:08:01.998934 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:01.999386 kubelet[2713]: E0123 01:08:01.999074 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:02.000510 kubelet[2713]: E0123 01:08:02.000395 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:08:02.511946 kubelet[2713]: I0123 
01:08:02.511802 2713 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:08:02.512424 kubelet[2713]: E0123 01:08:02.512396 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:02.623004 kubelet[2713]: E0123 01:08:02.622665 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:02.623860 containerd[1562]: time="2026-01-23T01:08:02.623814390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2nnc,Uid:960a44a5-6b2b-446e-b511-1fea653e9a6f,Namespace:kube-system,Attempt:0,}" Jan 23 01:08:02.625820 containerd[1562]: time="2026-01-23T01:08:02.625775555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-9b4d8,Uid:4157aec9-2f10-4912-b876-2bb1a760ce39,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:08:02.790739 kubelet[2713]: E0123 01:08:02.790639 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:02.797116 kubelet[2713]: E0123 01:08:02.797072 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:08:02.835119 systemd-networkd[1431]: cali9555eeaf9ad: Gained IPv6LL Jan 23 01:08:02.855640 systemd-networkd[1431]: cali38df2425b0c: Link UP Jan 23 01:08:02.856736 systemd-networkd[1431]: cali38df2425b0c: Gained carrier Jan 23 01:08:02.869858 containerd[1562]: 2026-01-23 01:08:02.684 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:08:02.869858 containerd[1562]: 2026-01-23 01:08:02.704 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0 coredns-668d6bf9bc- kube-system 960a44a5-6b2b-446e-b511-1fea653e9a6f 807 0 2026-01-23 01:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-48-230 coredns-668d6bf9bc-q2nnc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38df2425b0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-" Jan 23 01:08:02.869858 containerd[1562]: 2026-01-23 01:08:02.704 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" 
WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.869858 containerd[1562]: 2026-01-23 01:08:02.778 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" HandleID="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.779 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" HandleID="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000370bc0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-48-230", "pod":"coredns-668d6bf9bc-q2nnc", "timestamp":"2026-01-23 01:08:02.778240363 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.779 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.779 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.779 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.792 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" host="172-239-48-230" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.803 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.817 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.820 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.825 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.870507 containerd[1562]: 2026-01-23 01:08:02.825 [INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" host="172-239-48-230" Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.827 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.831 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" host="172-239-48-230" Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.836 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.131/26] block=192.168.103.128/26 
handle="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" host="172-239-48-230" Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.836 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.131/26] handle="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" host="172-239-48-230" Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.836 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:08:02.870759 containerd[1562]: 2026-01-23 01:08:02.836 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.131/26] IPv6=[] ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" HandleID="k8s-pod-network.ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.845 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960a44a5-6b2b-446e-b511-1fea653e9a6f", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"coredns-668d6bf9bc-q2nnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38df2425b0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.846 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.131/32] ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.846 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38df2425b0c 
ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.856 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.856 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"960a44a5-6b2b-446e-b511-1fea653e9a6f", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b", Pod:"coredns-668d6bf9bc-q2nnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38df2425b0c", MAC:"12:e3:dd:90:5d:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:02.872951 containerd[1562]: 2026-01-23 01:08:02.864 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" Namespace="kube-system" Pod="coredns-668d6bf9bc-q2nnc" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--q2nnc-eth0" Jan 23 01:08:02.905657 containerd[1562]: time="2026-01-23T01:08:02.905522775Z" level=info msg="connecting to shim ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b" address="unix:///run/containerd/s/913e0cb58386fb2a5d4c82a947ad6d7d8b1f63dc782c69b676e4affdcfeb3354" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:02.966111 systemd[1]: Started 
cri-containerd-ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b.scope - libcontainer container ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b. Jan 23 01:08:02.972656 systemd-networkd[1431]: cali6d67e53e0d6: Link UP Jan 23 01:08:02.974640 systemd-networkd[1431]: cali6d67e53e0d6: Gained carrier Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.707 [INFO][4226] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.731 [INFO][4226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0 calico-apiserver-75bfc7c68c- calico-apiserver 4157aec9-2f10-4912-b876-2bb1a760ce39 796 0 2026-01-23 01:07:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75bfc7c68c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-48-230 calico-apiserver-75bfc7c68c-9b4d8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6d67e53e0d6 [] [] }} ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.731 [INFO][4226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.807 [INFO][4253] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" HandleID="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.807 [INFO][4253] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" HandleID="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-48-230", "pod":"calico-apiserver-75bfc7c68c-9b4d8", "timestamp":"2026-01-23 01:08:02.807399087 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.808 [INFO][4253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.838 [INFO][4253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
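
The recurring "Nameserver limits exceeded" errors are kubelet clamping the pod's resolv.conf: the node lists more than three nameservers, and kubelet applies only the first three (the classic glibc resolver limit), logging the rest as omitted. A sketch of that clamping, assuming the conventional limit of 3:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // kubelet's limit, inherited from the glibc resolver

    // clampNameservers returns the nameservers that would be applied to a pod
    // and the ones that would be dropped, mirroring the kubelet log message.
    func clampNameservers(path string) (applied, omitted []string, err error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, nil, err
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) <= maxNameservers {
    		return servers, nil, sc.Err()
    	}
    	return servers[:maxNameservers], servers[maxNameservers:], sc.Err()
    }

    func main() {
    	applied, omitted, err := clampNameservers("/etc/resolv.conf")
    	fmt.Println("applied:", applied, "omitted:", omitted, "err:", err)
    }
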
Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.838 [INFO][4253] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.894 [INFO][4253] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.907 [INFO][4253] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.913 [INFO][4253] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.915 [INFO][4253] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.918 [INFO][4253] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.918 [INFO][4253] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.930 [INFO][4253] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8 Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.937 [INFO][4253] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.950 [INFO][4253] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.132/26] block=192.168.103.128/26 handle="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.950 [INFO][4253] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.132/26] handle="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" host="172-239-48-230" Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.950 [INFO][4253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
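
Each endpoint's host-side interface (cali9555eeaf9ad, cali38df2425b0c, ...) is named from a hash of the workload so the result fits Linux's 15-byte IFNAMSIZ limit. The sketch below mirrors that scheme, "cali" plus 11 hex characters of a SHA-1 digest; treat it as an illustration of the naming idea, not a byte-for-byte copy of Calico's function.

    package main

    import (
    	"crypto/sha1"
    	"fmt"
    )

    // vethNameForWorkload derives a stable, IFNAMSIZ-safe host interface name
    // from the workload identity, in the spirit of Calico's naming.
    func vethNameForWorkload(namespace, pod string) string {
    	sum := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, pod)))
    	return fmt.Sprintf("cali%x", sum)[:15] // "cali" + 11 hex chars = 15 bytes
    }

    func main() {
    	fmt.Println(vethNameForWorkload("calico-system", "goldmane-666569f655-wlzdw"))
    }
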
Jan 23 01:08:02.997266 containerd[1562]: 2026-01-23 01:08:02.950 [INFO][4253] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.132/26] IPv6=[] ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" HandleID="k8s-pod-network.302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.966 [INFO][4226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0", GenerateName:"calico-apiserver-75bfc7c68c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4157aec9-2f10-4912-b876-2bb1a760ce39", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75bfc7c68c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"calico-apiserver-75bfc7c68c-9b4d8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d67e53e0d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.967 [INFO][4226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.132/32] ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.967 [INFO][4226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d67e53e0d6 ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.974 [INFO][4226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.976 [INFO][4226] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0", GenerateName:"calico-apiserver-75bfc7c68c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4157aec9-2f10-4912-b876-2bb1a760ce39", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75bfc7c68c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8", Pod:"calico-apiserver-75bfc7c68c-9b4d8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6d67e53e0d6", MAC:"2e:29:24:fa:bd:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:02.998674 containerd[1562]: 2026-01-23 01:08:02.991 [INFO][4226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-9b4d8" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--9b4d8-eth0" Jan 23 01:08:03.031442 containerd[1562]: time="2026-01-23T01:08:03.031153350Z" level=info msg="connecting to shim 302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8" address="unix:///run/containerd/s/7ee162c406720aea8bbf0bce0ce99d1ace8dbeb4ce8c707c2570931d7b0bce8e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:03.052542 containerd[1562]: time="2026-01-23T01:08:03.052400521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q2nnc,Uid:960a44a5-6b2b-446e-b511-1fea653e9a6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b\"" Jan 23 01:08:03.055111 kubelet[2713]: E0123 01:08:03.055095 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:03.057816 containerd[1562]: time="2026-01-23T01:08:03.057777549Z" level=info msg="CreateContainer within sandbox \"ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:08:03.065252 containerd[1562]: time="2026-01-23T01:08:03.065210362Z" level=info msg="Container 9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28: CDI devices from 
CRI Config.CDIDevices: []" Jan 23 01:08:03.069101 containerd[1562]: time="2026-01-23T01:08:03.069082334Z" level=info msg="CreateContainer within sandbox \"ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28\"" Jan 23 01:08:03.070001 containerd[1562]: time="2026-01-23T01:08:03.069860531Z" level=info msg="StartContainer for \"9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28\"" Jan 23 01:08:03.072131 containerd[1562]: time="2026-01-23T01:08:03.072111247Z" level=info msg="connecting to shim 9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28" address="unix:///run/containerd/s/913e0cb58386fb2a5d4c82a947ad6d7d8b1f63dc782c69b676e4affdcfeb3354" protocol=ttrpc version=3 Jan 23 01:08:03.077136 systemd[1]: Started cri-containerd-302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8.scope - libcontainer container 302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8. Jan 23 01:08:03.096106 systemd[1]: Started cri-containerd-9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28.scope - libcontainer container 9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28. Jan 23 01:08:03.148759 containerd[1562]: time="2026-01-23T01:08:03.148454781Z" level=info msg="StartContainer for \"9e14525fb957b8a4188aa42a0df0bdb89f7c27c9ddfc669a731a421984a1ba28\" returns successfully" Jan 23 01:08:03.152633 containerd[1562]: time="2026-01-23T01:08:03.152610483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-9b4d8,Uid:4157aec9-2f10-4912-b876-2bb1a760ce39,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"302a40dcef80b2510fc1dc4be8aa446ceb20afb7d633ec55335b56e7b562e4a8\"" Jan 23 01:08:03.154260 containerd[1562]: time="2026-01-23T01:08:03.154242889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:03.301767 containerd[1562]: time="2026-01-23T01:08:03.301709961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:03.302448 containerd[1562]: time="2026-01-23T01:08:03.302421979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:03.302540 containerd[1562]: time="2026-01-23T01:08:03.302486649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:03.302710 kubelet[2713]: E0123 01:08:03.302626 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:03.302710 kubelet[2713]: E0123 01:08:03.302695 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 
01:08:03.304074 kubelet[2713]: E0123 01:08:03.304024 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:03.305223 kubelet[2713]: E0123 01:08:03.305185 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:08:03.623997 containerd[1562]: time="2026-01-23T01:08:03.623538494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79578dbdbf-d2s9w,Uid:1f20c944-f2fa-454c-8f1a-5b6a04bf7592,Namespace:calico-system,Attempt:0,}" Jan 23 01:08:03.750959 systemd-networkd[1431]: cali3daf2fd6455: Link UP Jan 23 01:08:03.751690 systemd-networkd[1431]: cali3daf2fd6455: Gained carrier Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.674 [INFO][4439] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0 calico-kube-controllers-79578dbdbf- calico-system 1f20c944-f2fa-454c-8f1a-5b6a04bf7592 803 0 2026-01-23 01:07:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79578dbdbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-48-230 calico-kube-controllers-79578dbdbf-d2s9w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3daf2fd6455 [] [] }} ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.675 [INFO][4439] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.709 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" HandleID="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Workload="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.710 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" HandleID="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Workload="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e10), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-48-230", "pod":"calico-kube-controllers-79578dbdbf-d2s9w", "timestamp":"2026-01-23 01:08:03.709703158 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.710 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.710 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
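
Unlike the Calico images, coredns starts cleanly above because its image is already on the node (ImagePullPolicy IfNotPresent), and the log shows the two CRI calls kubelet makes inside an existing sandbox: "CreateContainer within sandbox", then "StartContainer". A minimal CRI client doing the same against containerd's socket; the image tag is a placeholder (the log never names the coredns image), the sandbox id is copied from the log, and grpc.NewClient assumes grpc-go >= 1.63.

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	ctx := context.Background()
    	// Create the container inside the already-running pod sandbox.
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId: "ee7cb57f38bf6544958442ad59f236a725c1cb94c89feedfcf598c2d018a540b",
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
    			// Placeholder image reference; not taken from the log.
    			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
    		},
    		SandboxConfig: &runtimeapi.PodSandboxConfig{},
    	})
    	if err != nil {
    		panic(err)
    	}
    	// Then start it, matching the "StartContainer ... returns successfully" entry.
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
    		ContainerId: created.ContainerId,
    	})
    	fmt.Println("started", created.ContainerId, err)
    }
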
Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.710 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.718 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.723 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.728 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.729 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.732 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.732 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.733 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345 Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.737 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.744 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.133/26] block=192.168.103.128/26 handle="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.744 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.133/26] handle="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" host="172-239-48-230" Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.744 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
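
The IPAM sequence above acquires a host-wide lock, confirms this host's affinity to the block 192.168.103.128/26, and claims 192.168.103.133 from it; the later sandboxes in this section claim .134, .135, and .136 from the same block in order. A quick sanity check of the block arithmetic (a standalone sketch, not part of the logged system):

    // block_math.go - checks the bounds of the IPAM block seen above.
    // A /26 spans 2^(32-26) = 64 addresses, so the affined block runs
    // 192.168.103.128 through .191, and the claimed .133 sits inside it.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.103.128/26")
        first := block.Addr()
        // Walk 63 steps from the base address to reach the block's last address.
        last := first
        for i := 0; i < 63; i++ {
            last = last.Next()
        }
        fmt.Println("block:", block, "first:", first, "last:", last)
        fmt.Println("contains .133:", block.Contains(netip.MustParseAddr("192.168.103.133")))
    }
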
Jan 23 01:08:03.767374 containerd[1562]: 2026-01-23 01:08:03.744 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.133/26] IPv6=[] ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" HandleID="k8s-pod-network.70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Workload="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 01:08:03.746 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0", GenerateName:"calico-kube-controllers-79578dbdbf-", Namespace:"calico-system", SelfLink:"", UID:"1f20c944-f2fa-454c-8f1a-5b6a04bf7592", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79578dbdbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"calico-kube-controllers-79578dbdbf-d2s9w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3daf2fd6455", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 01:08:03.747 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.133/32] ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 01:08:03.747 [INFO][4439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3daf2fd6455 ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 01:08:03.751 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 
01:08:03.753 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0", GenerateName:"calico-kube-controllers-79578dbdbf-", Namespace:"calico-system", SelfLink:"", UID:"1f20c944-f2fa-454c-8f1a-5b6a04bf7592", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79578dbdbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345", Pod:"calico-kube-controllers-79578dbdbf-d2s9w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3daf2fd6455", MAC:"36:c4:93:85:5c:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:03.767881 containerd[1562]: 2026-01-23 01:08:03.764 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" Namespace="calico-system" Pod="calico-kube-controllers-79578dbdbf-d2s9w" WorkloadEndpoint="172--239--48--230-k8s-calico--kube--controllers--79578dbdbf--d2s9w-eth0" Jan 23 01:08:03.793794 containerd[1562]: time="2026-01-23T01:08:03.793744165Z" level=info msg="connecting to shim 70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345" address="unix:///run/containerd/s/b77dde2ca81b9598246c662f2e4f4c4d3255d980a71efa80ef6f1c36e25113db" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:03.794153 kubelet[2713]: E0123 01:08:03.794112 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:08:03.798558 kubelet[2713]: E0123 01:08:03.798464 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:08:03.798935 kubelet[2713]: E0123 01:08:03.798545 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:03.853225 systemd[1]: Started cri-containerd-70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345.scope - libcontainer container 70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345. Jan 23 01:08:03.874486 kubelet[2713]: I0123 01:08:03.873929 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q2nnc" podStartSLOduration=32.873915361 podStartE2EDuration="32.873915361s" podCreationTimestamp="2026-01-23 01:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:08:03.871751647 +0000 UTC m=+39.342636600" watchObservedRunningTime="2026-01-23 01:08:03.873915361 +0000 UTC m=+39.344800314" Jan 23 01:08:03.945544 containerd[1562]: time="2026-01-23T01:08:03.945474257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79578dbdbf-d2s9w,Uid:1f20c944-f2fa-454c-8f1a-5b6a04bf7592,Namespace:calico-system,Attempt:0,} returns sandbox id \"70c362964a97171cb8007722ac18f314b33150209090be8505df6d4fe40c7345\"" Jan 23 01:08:03.947760 containerd[1562]: time="2026-01-23T01:08:03.947713283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:08:03.989123 systemd-networkd[1431]: cali38df2425b0c: Gained IPv6LL Jan 23 01:08:04.057324 systemd-networkd[1431]: vxlan.calico: Link UP Jan 23 01:08:04.058052 systemd-networkd[1431]: vxlan.calico: Gained carrier Jan 23 01:08:04.128936 containerd[1562]: time="2026-01-23T01:08:04.128735063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:04.130215 containerd[1562]: time="2026-01-23T01:08:04.130171969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:08:04.130276 containerd[1562]: time="2026-01-23T01:08:04.130258490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:04.130767 kubelet[2713]: E0123 01:08:04.130499 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:04.132082 kubelet[2713]: E0123 01:08:04.130770 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:04.132309 kubelet[2713]: E0123 01:08:04.132249 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmkqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:04.133427 kubelet[2713]: E0123 01:08:04.133390 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:08:04.244216 systemd-networkd[1431]: cali6d67e53e0d6: Gained IPv6LL Jan 23 01:08:04.622797 containerd[1562]: time="2026-01-23T01:08:04.622755523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-flx8n,Uid:33350001-d074-4db1-9299-b1861aa3ad0b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:08:04.721438 systemd-networkd[1431]: calib3bd53c3801: Link UP Jan 23 01:08:04.722543 systemd-networkd[1431]: calib3bd53c3801: Gained carrier Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.657 [INFO][4597] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0 calico-apiserver-75bfc7c68c- calico-apiserver 33350001-d074-4db1-9299-b1861aa3ad0b 804 0 2026-01-23 01:07:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75bfc7c68c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-48-230 calico-apiserver-75bfc7c68c-flx8n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3bd53c3801 [] [] }} ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.657 [INFO][4597] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.683 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" HandleID="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.683 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" HandleID="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-48-230", "pod":"calico-apiserver-75bfc7c68c-flx8n", "timestamp":"2026-01-23 01:08:04.683404791 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.683 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.683 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.683 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.691 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.698 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.701 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.703 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.705 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.705 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.706 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8 Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.709 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.715 [INFO][4609] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.134/26] block=192.168.103.128/26 handle="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.715 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.134/26] handle="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" host="172-239-48-230" Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.715 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
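
Every pull of a ghcr.io/flatcar/calico/* image above and below fails the same way: the registry answers 404 for the v3.30.4 tag, containerd surfaces it as NotFound, and kubelet records ErrImagePull. Whether the tag exists can be checked directly against the registry. A minimal diagnostic sketch, assuming ghcr.io's standard OCI distribution manifest endpoint and anonymous token service (both are assumptions here, not taken from the log):

    // probe_tag.go - asks ghcr.io whether a tag resolves, mirroring the
    // lookup containerd performs. Repo and tag are the ones failing above.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        repo, tag := "flatcar/calico/apiserver", "v3.30.4"

        // ghcr.io hands out anonymous pull tokens from its /token endpoint.
        resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
        if err != nil {
            panic(err)
        }
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }
        resp.Body.Close()

        // HEAD the manifest: 200 means the tag resolves; a 404 here is the
        // same "not found" the kubelet and containerd entries report.
        req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        res.Body.Close()
        fmt.Println(repo+":"+tag, "->", res.Status)
    }
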
Jan 23 01:08:04.738003 containerd[1562]: 2026-01-23 01:08:04.715 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.134/26] IPv6=[] ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" HandleID="k8s-pod-network.eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Workload="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.718 [INFO][4597] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0", GenerateName:"calico-apiserver-75bfc7c68c-", Namespace:"calico-apiserver", SelfLink:"", UID:"33350001-d074-4db1-9299-b1861aa3ad0b", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75bfc7c68c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"calico-apiserver-75bfc7c68c-flx8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3bd53c3801", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.718 [INFO][4597] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.134/32] ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.718 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3bd53c3801 ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.723 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.724 [INFO][4597] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0", GenerateName:"calico-apiserver-75bfc7c68c-", Namespace:"calico-apiserver", SelfLink:"", UID:"33350001-d074-4db1-9299-b1861aa3ad0b", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75bfc7c68c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8", Pod:"calico-apiserver-75bfc7c68c-flx8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3bd53c3801", MAC:"32:ad:ff:92:80:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:04.739218 containerd[1562]: 2026-01-23 01:08:04.734 [INFO][4597] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" Namespace="calico-apiserver" Pod="calico-apiserver-75bfc7c68c-flx8n" WorkloadEndpoint="172--239--48--230-k8s-calico--apiserver--75bfc7c68c--flx8n-eth0" Jan 23 01:08:04.765619 containerd[1562]: time="2026-01-23T01:08:04.765040476Z" level=info msg="connecting to shim eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8" address="unix:///run/containerd/s/1971e649dc08619359bf7aaf08fc4aacd0872510830815945b7ed18c0848257b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:04.799548 kubelet[2713]: E0123 01:08:04.799515 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:04.800994 kubelet[2713]: E0123 01:08:04.800810 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:08:04.800994 kubelet[2713]: E0123 01:08:04.800894 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:08:04.803202 systemd[1]: Started cri-containerd-eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8.scope - libcontainer container eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8. Jan 23 01:08:04.889374 containerd[1562]: time="2026-01-23T01:08:04.889245585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75bfc7c68c-flx8n,Uid:33350001-d074-4db1-9299-b1861aa3ad0b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eeef76b9e3232f26e695c5c2d0da0f5d3b0f78e06038cde0a989381bdff4a7a8\"" Jan 23 01:08:04.892104 containerd[1562]: time="2026-01-23T01:08:04.891951760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:05.053994 containerd[1562]: time="2026-01-23T01:08:05.053934716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:05.054844 containerd[1562]: time="2026-01-23T01:08:05.054753864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:05.054844 containerd[1562]: time="2026-01-23T01:08:05.054795234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:05.055020 kubelet[2713]: E0123 01:08:05.054941 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:05.055218 kubelet[2713]: E0123 01:08:05.055019 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:05.055520 kubelet[2713]: E0123 01:08:05.055134 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:05.061071 kubelet[2713]: E0123 01:08:05.061042 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:08:05.075103 systemd-networkd[1431]: cali3daf2fd6455: Gained IPv6LL Jan 23 01:08:05.621525 kubelet[2713]: E0123 01:08:05.621488 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:05.622684 containerd[1562]: time="2026-01-23T01:08:05.622219750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znvnr,Uid:fb0b4136-7548-44aa-9706-52799d45da0f,Namespace:calico-system,Attempt:0,}" Jan 23 01:08:05.622865 containerd[1562]: time="2026-01-23T01:08:05.622842648Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4vxkj,Uid:43ad019e-3338-41a8-9c34-e47d883b0a20,Namespace:kube-system,Attempt:0,}" Jan 23 01:08:05.735330 systemd-networkd[1431]: cali3276732c7e6: Link UP Jan 23 01:08:05.736813 systemd-networkd[1431]: cali3276732c7e6: Gained carrier Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.673 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-csi--node--driver--znvnr-eth0 csi-node-driver- calico-system fb0b4136-7548-44aa-9706-52799d45da0f 699 0 2026-01-23 01:07:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-48-230 csi-node-driver-znvnr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3276732c7e6 [] [] }} ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.674 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.702 [INFO][4706] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" HandleID="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Workload="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.702 [INFO][4706] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" HandleID="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Workload="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-48-230", "pod":"csi-node-driver-znvnr", "timestamp":"2026-01-23 01:08:05.702304038 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.702 [INFO][4706] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.702 [INFO][4706] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.702 [INFO][4706] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.709 [INFO][4706] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.712 [INFO][4706] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.716 [INFO][4706] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.718 [INFO][4706] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.720 [INFO][4706] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.720 [INFO][4706] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.721 [INFO][4706] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349 Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.725 [INFO][4706] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.729 [INFO][4706] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.135/26] block=192.168.103.128/26 handle="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.729 [INFO][4706] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.135/26] handle="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" host="172-239-48-230" Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.729 [INFO][4706] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
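
The ImagePullBackOff entries threaded through this log are kubelet's retry damping: after each failed pull, the wait before the next attempt roughly doubles from a 10-second base up to a 5-minute ceiling (both values are kubelet-internal defaults quoted from memory, not read from this log). The schedule, sketched:

    // backoff.go - the doubling schedule behind the ImagePullBackOff
    // entries; base and cap below are assumed kubelet defaults.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("failed pull %d -> back off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // retries settle at the cap
            }
        }
    }
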
Jan 23 01:08:05.754538 containerd[1562]: 2026-01-23 01:08:05.729 [INFO][4706] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.135/26] IPv6=[] ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" HandleID="k8s-pod-network.85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Workload="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.731 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-csi--node--driver--znvnr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb0b4136-7548-44aa-9706-52799d45da0f", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"csi-node-driver-znvnr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3276732c7e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.731 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.135/32] ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.732 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3276732c7e6 ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.737 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.738 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" 
Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-csi--node--driver--znvnr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fb0b4136-7548-44aa-9706-52799d45da0f", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349", Pod:"csi-node-driver-znvnr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3276732c7e6", MAC:"6a:9b:f6:69:e1:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:08:05.755291 containerd[1562]: 2026-01-23 01:08:05.751 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" Namespace="calico-system" Pod="csi-node-driver-znvnr" WorkloadEndpoint="172--239--48--230-k8s-csi--node--driver--znvnr-eth0" Jan 23 01:08:05.783073 containerd[1562]: time="2026-01-23T01:08:05.783030775Z" level=info msg="connecting to shim 85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349" address="unix:///run/containerd/s/dad8c30bf8f2232013066358f1bb2f59510ff945c6df11792ebd49545adab819" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:08:05.807397 kubelet[2713]: E0123 01:08:05.807367 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:05.808180 kubelet[2713]: E0123 01:08:05.808045 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:08:05.809580 kubelet[2713]: E0123 01:08:05.809552 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:08:05.822105 systemd[1]: Started cri-containerd-85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349.scope - libcontainer container 85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349. Jan 23 01:08:05.874114 systemd-networkd[1431]: calib0b329cb13e: Link UP Jan 23 01:08:05.874362 systemd-networkd[1431]: calib0b329cb13e: Gained carrier Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.669 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0 coredns-668d6bf9bc- kube-system 43ad019e-3338-41a8-9c34-e47d883b0a20 800 0 2026-01-23 01:07:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-48-230 coredns-668d6bf9bc-4vxkj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib0b329cb13e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.669 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.704 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" HandleID="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.704 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" HandleID="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-48-230", "pod":"coredns-668d6bf9bc-4vxkj", "timestamp":"2026-01-23 01:08:05.704859893 +0000 UTC"}, Hostname:"172-239-48-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.705 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.729 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.730 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-48-230' Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.811 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.822 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.837 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.839 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.841 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.128/26 host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.841 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.103.128/26 handle="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.843 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2 Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.851 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.103.128/26 handle="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.855 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.103.136/26] block=192.168.103.128/26 handle="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.856 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.136/26] handle="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" host="172-239-48-230" Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.857 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
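
The recurring dns.go:153 warning is resolver truncation, not a Calico failure: glibc honors at most three nameserver entries in resolv.conf (MAXNS = 3), so kubelet clips the list it passes on and logs the three survivors, here 172.232.0.17, 172.232.0.16, and 172.232.0.21. A sketch of that clipping, assuming a plain resolv.conf format (the helper below is illustrative, not kubelet's actual code):

    // clip_ns.go - keeps only the first three nameservers, the way the
    // "some nameservers have been omitted" warning describes.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // glibc's MAXNS

    func clipNameservers(resolvConf string) []string {
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            servers = servers[:maxNameservers] // the rest are omitted
        }
        return servers
    }

    func main() {
        conf := "nameserver 172.232.0.17\nnameserver 172.232.0.16\nnameserver 172.232.0.21\nnameserver 10.0.0.1\n"
        fmt.Println(clipNameservers(conf))
    }
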
Jan 23 01:08:05.892707 containerd[1562]: 2026-01-23 01:08:05.857 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.103.136/26] IPv6=[] ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" HandleID="k8s-pod-network.f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Workload="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0"
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.861 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"43ad019e-3338-41a8-9c34-e47d883b0a20", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"", Pod:"coredns-668d6bf9bc-4vxkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0b329cb13e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.861 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.136/32] ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0"
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.861 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0b329cb13e ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0"
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.873 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0"
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.874 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"43ad019e-3338-41a8-9c34-e47d883b0a20", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-48-230", ContainerID:"f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2", Pod:"coredns-668d6bf9bc-4vxkj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib0b329cb13e", MAC:"fa:6c:45:97:fc:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 01:08:05.893519 containerd[1562]: 2026-01-23 01:08:05.885 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vxkj" WorkloadEndpoint="172--239--48--230-k8s-coredns--668d6bf9bc--4vxkj-eth0"
Jan 23 01:08:05.928200 containerd[1562]: time="2026-01-23T01:08:05.924271375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-znvnr,Uid:fb0b4136-7548-44aa-9706-52799d45da0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"85e318c47a5288f9170db748974d960a7e1063e8bf04ddc43212f869b9441349\""
Jan 23 01:08:05.928590 containerd[1562]: time="2026-01-23T01:08:05.928565687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:08:05.944735 containerd[1562]: time="2026-01-23T01:08:05.944683299Z" level=info msg="connecting to shim f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2" address="unix:///run/containerd/s/90ab2c95c5c7cf7edcfa2db4333a846d33a7b2e30931f4299b5a2a7f23ea344f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:08:05.971091 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL
Jan 23 01:08:05.980089 systemd[1]: Started cri-containerd-f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2.scope - libcontainer container f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2.
Jan 23 01:08:06.033321 containerd[1562]: time="2026-01-23T01:08:06.033295430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vxkj,Uid:43ad019e-3338-41a8-9c34-e47d883b0a20,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2\""
Jan 23 01:08:06.034295 kubelet[2713]: E0123 01:08:06.034277 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:08:06.038928 containerd[1562]: time="2026-01-23T01:08:06.038888861Z" level=info msg="CreateContainer within sandbox \"f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 01:08:06.050989 containerd[1562]: time="2026-01-23T01:08:06.047177549Z" level=info msg="Container 8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:08:06.054938 containerd[1562]: time="2026-01-23T01:08:06.054906687Z" level=info msg="CreateContainer within sandbox \"f5dcf8b3479cff4d95f0afc063274259faa7670acfeba7302c1ab207dd97b4f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9\""
Jan 23 01:08:06.055324 containerd[1562]: time="2026-01-23T01:08:06.055300826Z" level=info msg="StartContainer for \"8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9\""
Jan 23 01:08:06.055951 containerd[1562]: time="2026-01-23T01:08:06.055923735Z" level=info msg="connecting to shim 8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9" address="unix:///run/containerd/s/90ab2c95c5c7cf7edcfa2db4333a846d33a7b2e30931f4299b5a2a7f23ea344f" protocol=ttrpc version=3
Jan 23 01:08:06.078095 systemd[1]: Started cri-containerd-8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9.scope - libcontainer container 8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9.
Jan 23 01:08:06.078706 containerd[1562]: time="2026-01-23T01:08:06.078678820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:06.079579 containerd[1562]: time="2026-01-23T01:08:06.079450759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:08:06.079579 containerd[1562]: time="2026-01-23T01:08:06.079514239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:08:06.079696 kubelet[2713]: E0123 01:08:06.079665 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:06.079771 kubelet[2713]: E0123 01:08:06.079703 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:06.079833 kubelet[2713]: E0123 01:08:06.079787 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:06.082140 containerd[1562]: time="2026-01-23T01:08:06.081965475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:08:06.119395 containerd[1562]: time="2026-01-23T01:08:06.119345118Z" level=info msg="StartContainer for \"8d1eabfba27e8c80715ac94e06c4ccb13e90eb6f4121a824232615a8b9cab6f9\" returns successfully"
Jan 23 01:08:06.219654 containerd[1562]: time="2026-01-23T01:08:06.219324824Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:06.220579 containerd[1562]: time="2026-01-23T01:08:06.220545692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:08:06.220738 containerd[1562]: time="2026-01-23T01:08:06.220617322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:08:06.221115 kubelet[2713]: E0123 01:08:06.220960 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:06.221195 kubelet[2713]: E0123 01:08:06.221132 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:06.221472 kubelet[2713]: E0123 01:08:06.221276 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:06.222482 kubelet[2713]: E0123 01:08:06.222443 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:08:06.419202 systemd-networkd[1431]: calib3bd53c3801: Gained IPv6LL
Jan 23 01:08:06.808380 kubelet[2713]: E0123 01:08:06.808352 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:08:06.812089 kubelet[2713]: E0123 01:08:06.812006 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b"
Jan 23 01:08:06.813161 kubelet[2713]: E0123 01:08:06.812843 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:08:06.820728 kubelet[2713]: I0123 01:08:06.819409 2713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4vxkj" podStartSLOduration=35.819401685 podStartE2EDuration="35.819401685s" podCreationTimestamp="2026-01-23 01:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:08:06.819171824 +0000 UTC m=+42.290056777" watchObservedRunningTime="2026-01-23 01:08:06.819401685 +0000 UTC m=+42.290286648"
Jan 23 01:08:07.123108 systemd-networkd[1431]: cali3276732c7e6: Gained IPv6LL
Jan 23 01:08:07.124087 systemd-networkd[1431]: calib0b329cb13e: Gained IPv6LL
Jan 23 01:08:07.813813 kubelet[2713]: E0123 01:08:07.813308 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:08:07.816114 kubelet[2713]: E0123 01:08:07.816073 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:08:08.816779 kubelet[2713]: E0123 01:08:08.816727 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:08:09.624440 containerd[1562]: time="2026-01-23T01:08:09.624368747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:08:09.781893 containerd[1562]: time="2026-01-23T01:08:09.781802093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:09.783123 containerd[1562]: time="2026-01-23T01:08:09.783047303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:08:09.783276 containerd[1562]: time="2026-01-23T01:08:09.783153802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:08:09.783455 kubelet[2713]: E0123 01:08:09.783372 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:08:09.783455 kubelet[2713]: E0123 01:08:09.783448 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:08:09.783638 kubelet[2713]: E0123 01:08:09.783588 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7c902b39aa0b49bfb339d3ac49963bbf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:09.786775 containerd[1562]: time="2026-01-23T01:08:09.786737429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:08:09.942165 containerd[1562]: time="2026-01-23T01:08:09.941556298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:09.945313 containerd[1562]: time="2026-01-23T01:08:09.945228524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:08:09.945503 containerd[1562]: time="2026-01-23T01:08:09.945264554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:08:09.945805 kubelet[2713]: E0123 01:08:09.945758 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:08:09.946639 kubelet[2713]: E0123 01:08:09.946268 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:08:09.946639 kubelet[2713]: E0123 01:08:09.946427 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:09.948223 kubelet[2713]: E0123 01:08:09.948143 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e"
Jan 23 01:08:14.625153 containerd[1562]: time="2026-01-23T01:08:14.624997755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 01:08:14.773229 containerd[1562]: time="2026-01-23T01:08:14.773065409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:14.774352 containerd[1562]: time="2026-01-23T01:08:14.774303099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:08:14.774533 containerd[1562]: time="2026-01-23T01:08:14.774323709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 01:08:14.774920 kubelet[2713]: E0123 01:08:14.774866 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:08:14.777323 kubelet[2713]: E0123 01:08:14.774936 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:08:14.777323 kubelet[2713]: E0123 01:08:14.775923 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:14.777323 kubelet[2713]: E0123 01:08:14.777068 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf"
Jan 23 01:08:17.624090 containerd[1562]: time="2026-01-23T01:08:17.624020176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:08:17.775201 containerd[1562]: time="2026-01-23T01:08:17.775133626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:17.776854 containerd[1562]: time="2026-01-23T01:08:17.776797407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:08:17.776997 containerd[1562]: time="2026-01-23T01:08:17.776917667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:08:17.777273 kubelet[2713]: E0123 01:08:17.777216 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:08:17.778057 kubelet[2713]: E0123 01:08:17.777291 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:08:17.778057 kubelet[2713]: E0123 01:08:17.777546 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:17.786042 kubelet[2713]: E0123 01:08:17.785246 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39"
Jan 23 01:08:19.625535 containerd[1562]: time="2026-01-23T01:08:19.624931250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:08:19.812361 containerd[1562]: time="2026-01-23T01:08:19.812273579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:19.813802 containerd[1562]: time="2026-01-23T01:08:19.813760900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:08:19.813916 containerd[1562]: time="2026-01-23T01:08:19.813888899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:08:19.814177 kubelet[2713]: E0123 01:08:19.814105 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:08:19.815041 kubelet[2713]: E0123 01:08:19.814187 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:08:19.815303 containerd[1562]: time="2026-01-23T01:08:19.815271750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:08:19.816604 kubelet[2713]: E0123 01:08:19.816525 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmkqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:19.818114 kubelet[2713]: E0123 01:08:19.817960 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592"
Jan 23 01:08:19.973988 containerd[1562]: time="2026-01-23T01:08:19.973816015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:19.975699 containerd[1562]: time="2026-01-23T01:08:19.975597955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:08:19.975699 containerd[1562]: time="2026-01-23T01:08:19.975634905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:08:19.975980 kubelet[2713]: E0123 01:08:19.975919 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:19.976028 kubelet[2713]: E0123 01:08:19.976010 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:08:19.976699 kubelet[2713]: E0123 01:08:19.976649 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:19.978698 containerd[1562]: time="2026-01-23T01:08:19.978669657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:08:20.150103 containerd[1562]: time="2026-01-23T01:08:20.150025502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:20.151878 containerd[1562]: time="2026-01-23T01:08:20.151817113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:08:20.152320 containerd[1562]: time="2026-01-23T01:08:20.151942133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:08:20.152710 kubelet[2713]: E0123 01:08:20.152279 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:20.152771 kubelet[2713]: E0123 01:08:20.152699 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:08:20.152889 kubelet[2713]: E0123 01:08:20.152836 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:20.154264 kubelet[2713]: E0123 01:08:20.154195 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:08:20.624366 kubelet[2713]: E0123 01:08:20.624315 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e"
Jan 23 01:08:22.625144 containerd[1562]: time="2026-01-23T01:08:22.625094135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:08:22.761001 containerd[1562]: time="2026-01-23T01:08:22.760925876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:22.762235 containerd[1562]: time="2026-01-23T01:08:22.762204226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:08:22.762375 containerd[1562]: time="2026-01-23T01:08:22.762225056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:08:22.762695 kubelet[2713]: E0123 01:08:22.762462 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:08:22.763143 kubelet[2713]: E0123 01:08:22.762704 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:08:22.763143 kubelet[2713]: E0123 01:08:22.762816 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:22.764134 kubelet[2713]: E0123 01:08:22.764108 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b"
Jan 23 01:08:25.823808 kubelet[2713]: E0123 01:08:25.823413 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:08:29.623880 kubelet[2713]: E0123 01:08:29.623476 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf"
Jan 23 01:08:31.624477 kubelet[2713]: E0123 01:08:31.624408 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592"
Jan 23 01:08:31.625769 containerd[1562]: time="2026-01-23T01:08:31.625450729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:08:31.627060 kubelet[2713]: E0123 01:08:31.626603 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:08:31.763509 containerd[1562]: time="2026-01-23T01:08:31.763401357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:31.765550 containerd[1562]: time="2026-01-23T01:08:31.765401379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:08:31.766264 containerd[1562]: time="2026-01-23T01:08:31.765470500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:08:31.766407 kubelet[2713]: E0123 01:08:31.766319 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:08:31.766498 kubelet[2713]: E0123 01:08:31.766465 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:08:31.767468 kubelet[2713]: E0123 01:08:31.767397 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7c902b39aa0b49bfb339d3ac49963bbf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:08:31.770905 containerd[1562]: time="2026-01-23T01:08:31.770854046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:08:31.916117 containerd[1562]: time="2026-01-23T01:08:31.915856084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:08:31.917543 containerd[1562]: time="2026-01-23T01:08:31.917343506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:08:31.917543 containerd[1562]: time="2026-01-23T01:08:31.917478497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:08:31.918209 kubelet[2713]: E0123 01:08:31.917747 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:08:31.918209 kubelet[2713]: E0123 01:08:31.917800 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:08:31.918209 kubelet[2713]: E0123 01:08:31.917889 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:31.919329 kubelet[2713]: E0123 01:08:31.919301 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:08:32.623784 kubelet[2713]: E0123 01:08:32.623584 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:08:36.627104 kubelet[2713]: E0123 01:08:36.625603 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:08:37.622277 kubelet[2713]: E0123 01:08:37.621937 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:41.625627 containerd[1562]: time="2026-01-23T01:08:41.624778576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:08:41.756746 containerd[1562]: time="2026-01-23T01:08:41.756694809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:41.757780 containerd[1562]: time="2026-01-23T01:08:41.757748520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:08:41.758033 containerd[1562]: time="2026-01-23T01:08:41.757829740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:41.758251 kubelet[2713]: E0123 01:08:41.758207 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:41.758760 kubelet[2713]: E0123 01:08:41.758256 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:08:41.758760 kubelet[2713]: E0123 01:08:41.758385 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:41.759851 kubelet[2713]: E0123 01:08:41.759555 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:08:43.623538 containerd[1562]: 
time="2026-01-23T01:08:43.623491130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:08:43.625473 kubelet[2713]: E0123 01:08:43.624296 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:08:43.751668 containerd[1562]: time="2026-01-23T01:08:43.751622588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:43.752771 containerd[1562]: time="2026-01-23T01:08:43.752742839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:08:43.752879 containerd[1562]: time="2026-01-23T01:08:43.752808700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:08:43.753065 kubelet[2713]: E0123 01:08:43.753026 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:43.753158 kubelet[2713]: E0123 01:08:43.753073 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:08:43.753213 kubelet[2713]: E0123 01:08:43.753175 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:43.756113 containerd[1562]: time="2026-01-23T01:08:43.756090698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:08:43.887082 containerd[1562]: time="2026-01-23T01:08:43.886865621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:43.889210 containerd[1562]: time="2026-01-23T01:08:43.889147030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:08:43.889290 containerd[1562]: time="2026-01-23T01:08:43.889232792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:08:43.889501 kubelet[2713]: E0123 01:08:43.889463 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:43.889561 kubelet[2713]: E0123 01:08:43.889530 2713 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:08:43.890578 kubelet[2713]: E0123 01:08:43.890533 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:43.891751 kubelet[2713]: E0123 01:08:43.891701 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:08:45.622858 containerd[1562]: time="2026-01-23T01:08:45.622817861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:08:45.991389 containerd[1562]: time="2026-01-23T01:08:45.991245629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:45.992384 containerd[1562]: time="2026-01-23T01:08:45.992346138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:08:45.992444 containerd[1562]: time="2026-01-23T01:08:45.992421598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:08:45.992628 kubelet[2713]: E0123 01:08:45.992567 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:45.993008 kubelet[2713]: E0123 01:08:45.992636 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:08:45.993461 kubelet[2713]: E0123 01:08:45.993415 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmkqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:45.994604 kubelet[2713]: E0123 01:08:45.994566 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:08:47.624199 containerd[1562]: time="2026-01-23T01:08:47.624139344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:47.764064 containerd[1562]: time="2026-01-23T01:08:47.763997637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:47.765024 containerd[1562]: time="2026-01-23T01:08:47.764984195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:47.765083 containerd[1562]: time="2026-01-23T01:08:47.765057155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:47.765271 kubelet[2713]: E0123 01:08:47.765226 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:47.765637 kubelet[2713]: E0123 01:08:47.765297 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:47.765637 kubelet[2713]: E0123 01:08:47.765427 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:47.767053 kubelet[2713]: E0123 01:08:47.767021 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:08:50.626011 containerd[1562]: time="2026-01-23T01:08:50.625752117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:08:50.759383 containerd[1562]: time="2026-01-23T01:08:50.759306104Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:08:50.760739 containerd[1562]: time="2026-01-23T01:08:50.760523284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:08:50.760739 containerd[1562]: time="2026-01-23T01:08:50.760645104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:08:50.761010 kubelet[2713]: E0123 01:08:50.760942 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:50.761867 kubelet[2713]: E0123 01:08:50.761035 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:08:50.761867 kubelet[2713]: E0123 01:08:50.761335 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:08:50.762727 kubelet[2713]: E0123 01:08:50.762672 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:08:54.629067 kubelet[2713]: E0123 01:08:54.628503 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:08:54.631018 kubelet[2713]: E0123 01:08:54.630304 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:08:55.625060 kubelet[2713]: E0123 01:08:55.623656 2713 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:08:57.622664 kubelet[2713]: E0123 01:08:57.622631 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:58.624866 kubelet[2713]: E0123 01:08:58.624818 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:08:59.621867 kubelet[2713]: E0123 01:08:59.621815 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:08:59.622816 kubelet[2713]: E0123 01:08:59.622777 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:09:01.621910 kubelet[2713]: E0123 01:09:01.621570 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:02.624147 kubelet[2713]: E0123 01:09:02.623897 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:09:07.623732 kubelet[2713]: E0123 01:09:07.623681 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:09:08.622327 kubelet[2713]: E0123 01:09:08.622271 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:09.623602 kubelet[2713]: E0123 01:09:09.623557 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:09:09.625440 kubelet[2713]: E0123 01:09:09.625407 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:09:11.623347 kubelet[2713]: E0123 01:09:11.623303 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:09:13.623513 kubelet[2713]: E0123 01:09:13.623373 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:09:14.627395 kubelet[2713]: E0123 01:09:14.627103 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:09:18.529915 systemd[1]: Started sshd@7-172.239.48.230:22-68.220.241.50:37140.service - OpenSSH per-connection server daemon (68.220.241.50:37140). Jan 23 01:09:18.704003 sshd[4963]: Accepted publickey for core from 68.220.241.50 port 37140 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:18.706159 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:18.714179 systemd-logind[1531]: New session 8 of user core. Jan 23 01:09:18.719091 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:09:18.911010 sshd[4966]: Connection closed by 68.220.241.50 port 37140 Jan 23 01:09:18.912030 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:18.918532 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:09:18.919263 systemd[1]: sshd@7-172.239.48.230:22-68.220.241.50:37140.service: Deactivated successfully. Jan 23 01:09:18.922103 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:09:18.926376 systemd-logind[1531]: Removed session 8. 
Jan 23 01:09:21.622804 containerd[1562]: time="2026-01-23T01:09:21.622571732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:09:21.795799 containerd[1562]: time="2026-01-23T01:09:21.795743042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:21.796730 containerd[1562]: time="2026-01-23T01:09:21.796690536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:09:21.796782 containerd[1562]: time="2026-01-23T01:09:21.796768997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:09:21.797197 kubelet[2713]: E0123 01:09:21.797142 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:09:21.798232 kubelet[2713]: E0123 01:09:21.797333 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:09:21.798587 kubelet[2713]: E0123 01:09:21.798302 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7c902b39aa0b49bfb339d3ac49963bbf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:21.800816 containerd[1562]: time="2026-01-23T01:09:21.800594065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:09:21.926741 containerd[1562]: time="2026-01-23T01:09:21.926152772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:21.928242 containerd[1562]: time="2026-01-23T01:09:21.928155011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:09:21.928242 containerd[1562]: time="2026-01-23T01:09:21.928191261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:09:21.928927 kubelet[2713]: E0123 01:09:21.928430 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:09:21.928927 kubelet[2713]: E0123 01:09:21.928483 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:09:21.928927 kubelet[2713]: E0123 01:09:21.928580 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6cll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5df4546f58-gg8t9_calico-system(002ca359-8c71-4c3b-83f5-6e16f458e48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:21.930040 kubelet[2713]: E0123 01:09:21.929947 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:09:22.625996 kubelet[2713]: E0123 01:09:22.625449 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:09:22.626176 containerd[1562]: time="2026-01-23T01:09:22.625640038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:09:22.626465 kubelet[2713]: E0123 01:09:22.626434 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:09:22.783466 containerd[1562]: time="2026-01-23T01:09:22.783416367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:22.784149 containerd[1562]: time="2026-01-23T01:09:22.784115960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:09:22.784221 containerd[1562]: time="2026-01-23T01:09:22.784199281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:22.784399 kubelet[2713]: E0123 01:09:22.784358 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:09:22.784466 kubelet[2713]: E0123 01:09:22.784411 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:09:22.784577 kubelet[2713]: E0123 01:09:22.784517 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntfpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wlzdw_calico-system(ab709807-4327-49bc-a89a-808c81e848bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:22.785949 kubelet[2713]: E0123 01:09:22.785902 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:09:23.943197 systemd[1]: Started 
sshd@8-172.239.48.230:22-68.220.241.50:41704.service - OpenSSH per-connection server daemon (68.220.241.50:41704). Jan 23 01:09:24.107608 sshd[4978]: Accepted publickey for core from 68.220.241.50 port 41704 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:24.110288 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:24.117411 systemd-logind[1531]: New session 9 of user core. Jan 23 01:09:24.126363 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:09:24.309834 sshd[4981]: Connection closed by 68.220.241.50 port 41704 Jan 23 01:09:24.311786 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:24.317476 systemd[1]: sshd@8-172.239.48.230:22-68.220.241.50:41704.service: Deactivated successfully. Jan 23 01:09:24.320088 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:09:24.320531 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:09:24.325226 systemd-logind[1531]: Removed session 9. Jan 23 01:09:25.623190 kubelet[2713]: E0123 01:09:25.622725 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:09:26.625540 kubelet[2713]: E0123 01:09:26.624966 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:09:29.350197 systemd[1]: Started sshd@9-172.239.48.230:22-68.220.241.50:41708.service - OpenSSH per-connection server daemon (68.220.241.50:41708). Jan 23 01:09:29.527734 sshd[5026]: Accepted publickey for core from 68.220.241.50 port 41708 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:29.529204 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:29.533947 systemd-logind[1531]: New session 10 of user core. Jan 23 01:09:29.543128 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:09:29.622401 kubelet[2713]: E0123 01:09:29.622063 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:29.726026 sshd[5032]: Connection closed by 68.220.241.50 port 41708 Jan 23 01:09:29.727153 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:29.731215 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. 
Jan 23 01:09:29.732102 systemd[1]: sshd@9-172.239.48.230:22-68.220.241.50:41708.service: Deactivated successfully. Jan 23 01:09:29.734139 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:09:29.735847 systemd-logind[1531]: Removed session 10. Jan 23 01:09:29.757704 systemd[1]: Started sshd@10-172.239.48.230:22-68.220.241.50:41712.service - OpenSSH per-connection server daemon (68.220.241.50:41712). Jan 23 01:09:29.928813 sshd[5046]: Accepted publickey for core from 68.220.241.50 port 41712 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:29.932766 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:29.943822 systemd-logind[1531]: New session 11 of user core. Jan 23 01:09:29.950229 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:09:30.164000 sshd[5049]: Connection closed by 68.220.241.50 port 41712 Jan 23 01:09:30.164508 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:30.170664 systemd[1]: sshd@10-172.239.48.230:22-68.220.241.50:41712.service: Deactivated successfully. Jan 23 01:09:30.172845 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:09:30.173905 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:09:30.182351 systemd-logind[1531]: Removed session 11. Jan 23 01:09:30.206600 systemd[1]: Started sshd@11-172.239.48.230:22-68.220.241.50:41726.service - OpenSSH per-connection server daemon (68.220.241.50:41726). Jan 23 01:09:30.417107 sshd[5059]: Accepted publickey for core from 68.220.241.50 port 41726 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:30.418634 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:30.426328 systemd-logind[1531]: New session 12 of user core. Jan 23 01:09:30.436288 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:09:30.667017 sshd[5062]: Connection closed by 68.220.241.50 port 41726 Jan 23 01:09:30.667579 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:30.673319 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:09:30.677394 systemd[1]: sshd@11-172.239.48.230:22-68.220.241.50:41726.service: Deactivated successfully. Jan 23 01:09:30.681933 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:09:30.685952 systemd-logind[1531]: Removed session 12. 
Jan 23 01:09:33.623716 kubelet[2713]: E0123 01:09:33.623661 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:09:35.622950 kubelet[2713]: E0123 01:09:35.622720 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:35.624348 kubelet[2713]: E0123 01:09:35.624299 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:09:35.703455 systemd[1]: Started sshd@12-172.239.48.230:22-68.220.241.50:48226.service - OpenSSH per-connection server daemon (68.220.241.50:48226). Jan 23 01:09:35.870546 sshd[5077]: Accepted publickey for core from 68.220.241.50 port 48226 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:35.872063 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:35.878324 systemd-logind[1531]: New session 13 of user core. Jan 23 01:09:35.884287 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:09:36.067802 sshd[5080]: Connection closed by 68.220.241.50 port 48226 Jan 23 01:09:36.069723 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:36.075581 systemd[1]: sshd@12-172.239.48.230:22-68.220.241.50:48226.service: Deactivated successfully. Jan 23 01:09:36.081085 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:09:36.083175 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:09:36.086248 systemd-logind[1531]: Removed session 13. Jan 23 01:09:36.100524 systemd[1]: Started sshd@13-172.239.48.230:22-68.220.241.50:48232.service - OpenSSH per-connection server daemon (68.220.241.50:48232). Jan 23 01:09:36.262557 sshd[5092]: Accepted publickey for core from 68.220.241.50 port 48232 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:36.263521 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:36.269134 systemd-logind[1531]: New session 14 of user core. 
Jan 23 01:09:36.276167 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:09:36.577659 sshd[5095]: Connection closed by 68.220.241.50 port 48232 Jan 23 01:09:36.578319 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:36.585956 systemd[1]: sshd@13-172.239.48.230:22-68.220.241.50:48232.service: Deactivated successfully. Jan 23 01:09:36.590558 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:09:36.592197 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:09:36.594711 systemd-logind[1531]: Removed session 14. Jan 23 01:09:36.617135 systemd[1]: Started sshd@14-172.239.48.230:22-68.220.241.50:48238.service - OpenSSH per-connection server daemon (68.220.241.50:48238). Jan 23 01:09:36.625132 containerd[1562]: time="2026-01-23T01:09:36.625062887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:09:36.761994 containerd[1562]: time="2026-01-23T01:09:36.761111561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:36.762882 containerd[1562]: time="2026-01-23T01:09:36.762845817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:09:36.763013 containerd[1562]: time="2026-01-23T01:09:36.762901217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:36.763265 kubelet[2713]: E0123 01:09:36.763234 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:36.763848 kubelet[2713]: E0123 01:09:36.763274 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:36.763848 kubelet[2713]: E0123 01:09:36.763371 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktxtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-9b4d8_calico-apiserver(4157aec9-2f10-4912-b876-2bb1a760ce39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:36.765002 kubelet[2713]: E0123 01:09:36.764965 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:09:36.792841 sshd[5105]: Accepted publickey for core from 68.220.241.50 port 48238 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:36.793515 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:36.803895 systemd-logind[1531]: New session 15 of user core. Jan 23 01:09:36.808089 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:09:37.599597 sshd[5108]: Connection closed by 68.220.241.50 port 48238 Jan 23 01:09:37.601219 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:37.609435 systemd[1]: sshd@14-172.239.48.230:22-68.220.241.50:48238.service: Deactivated successfully. 
Jan 23 01:09:37.614896 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:09:37.617928 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:09:37.625227 containerd[1562]: time="2026-01-23T01:09:37.624705321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:09:37.644232 systemd[1]: Started sshd@15-172.239.48.230:22-68.220.241.50:48248.service - OpenSSH per-connection server daemon (68.220.241.50:48248). Jan 23 01:09:37.646680 systemd-logind[1531]: Removed session 15. Jan 23 01:09:37.765285 containerd[1562]: time="2026-01-23T01:09:37.765145166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:37.766216 containerd[1562]: time="2026-01-23T01:09:37.766106130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:09:37.766216 containerd[1562]: time="2026-01-23T01:09:37.766153560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:09:37.771072 kubelet[2713]: E0123 01:09:37.771026 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:09:37.771589 kubelet[2713]: E0123 01:09:37.771523 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:09:37.772923 kubelet[2713]: E0123 01:09:37.772831 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lmkqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79578dbdbf-d2s9w_calico-system(1f20c944-f2fa-454c-8f1a-5b6a04bf7592): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:37.774671 kubelet[2713]: E0123 01:09:37.774428 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:09:37.775722 containerd[1562]: time="2026-01-23T01:09:37.775532797Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:09:37.814657 sshd[5125]: Accepted publickey for core from 68.220.241.50 port 48248 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:37.817579 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:37.826346 systemd-logind[1531]: New session 16 of user core. Jan 23 01:09:37.834113 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:09:37.910591 containerd[1562]: time="2026-01-23T01:09:37.910311362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:37.911869 containerd[1562]: time="2026-01-23T01:09:37.911771548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:09:37.911869 containerd[1562]: time="2026-01-23T01:09:37.911844538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:09:37.912764 kubelet[2713]: E0123 01:09:37.912305 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:09:37.912764 kubelet[2713]: E0123 01:09:37.912351 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:09:37.912764 kubelet[2713]: E0123 01:09:37.912559 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:37.913254 containerd[1562]: time="2026-01-23T01:09:37.913064433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:09:38.049378 containerd[1562]: time="2026-01-23T01:09:38.049314332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:38.051081 containerd[1562]: time="2026-01-23T01:09:38.051011228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:09:38.051191 containerd[1562]: time="2026-01-23T01:09:38.051144718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:09:38.052377 kubelet[2713]: E0123 01:09:38.052296 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:38.052494 kubelet[2713]: E0123 01:09:38.052393 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:09:38.052945 kubelet[2713]: E0123 01:09:38.052854 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98pqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75bfc7c68c-flx8n_calico-apiserver(33350001-d074-4db1-9299-b1861aa3ad0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:38.055685 containerd[1562]: time="2026-01-23T01:09:38.055390065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:09:38.057132 kubelet[2713]: E0123 01:09:38.056111 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:09:38.162287 sshd[5128]: Connection closed by 68.220.241.50 port 48248 Jan 23 
01:09:38.162223 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:38.167907 systemd[1]: sshd@15-172.239.48.230:22-68.220.241.50:48248.service: Deactivated successfully. Jan 23 01:09:38.170316 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:09:38.172738 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:09:38.175445 systemd-logind[1531]: Removed session 16. Jan 23 01:09:38.191880 systemd[1]: Started sshd@16-172.239.48.230:22-68.220.241.50:48252.service - OpenSSH per-connection server daemon (68.220.241.50:48252). Jan 23 01:09:38.196838 containerd[1562]: time="2026-01-23T01:09:38.196801481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:09:38.197555 containerd[1562]: time="2026-01-23T01:09:38.197500513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:09:38.197685 containerd[1562]: time="2026-01-23T01:09:38.197580803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:09:38.197778 kubelet[2713]: E0123 01:09:38.197741 2713 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:09:38.197829 kubelet[2713]: E0123 01:09:38.197802 2713 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:09:38.198138 kubelet[2713]: E0123 01:09:38.198097 2713 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkd7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-znvnr_calico-system(fb0b4136-7548-44aa-9706-52799d45da0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:09:38.199358 kubelet[2713]: E0123 01:09:38.199295 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:09:38.363936 sshd[5138]: Accepted publickey for core from 68.220.241.50 port 48252 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:38.362164 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:38.368850 systemd-logind[1531]: New session 17 of user core. Jan 23 01:09:38.377043 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 01:09:38.565461 sshd[5141]: Connection closed by 68.220.241.50 port 48252 Jan 23 01:09:38.566430 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:38.573199 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:09:38.574307 systemd[1]: sshd@16-172.239.48.230:22-68.220.241.50:48252.service: Deactivated successfully. Jan 23 01:09:38.577515 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:09:38.583135 systemd-logind[1531]: Removed session 17. Jan 23 01:09:40.622544 kubelet[2713]: E0123 01:09:40.622502 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:43.604180 systemd[1]: Started sshd@17-172.239.48.230:22-68.220.241.50:48310.service - OpenSSH per-connection server daemon (68.220.241.50:48310). Jan 23 01:09:43.622170 kubelet[2713]: E0123 01:09:43.622143 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 01:09:43.786393 sshd[5175]: Accepted publickey for core from 68.220.241.50 port 48310 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:43.789095 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:43.796177 systemd-logind[1531]: New session 18 of user core. Jan 23 01:09:43.804067 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:09:43.983641 sshd[5178]: Connection closed by 68.220.241.50 port 48310 Jan 23 01:09:43.984248 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:43.989009 systemd[1]: sshd@17-172.239.48.230:22-68.220.241.50:48310.service: Deactivated successfully. Jan 23 01:09:43.992934 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:09:43.994349 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:09:43.996233 systemd-logind[1531]: Removed session 18. 
Jan 23 01:09:45.624596 kubelet[2713]: E0123 01:09:45.624103 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf" Jan 23 01:09:48.629239 kubelet[2713]: E0123 01:09:48.629180 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f" Jan 23 01:09:48.630597 kubelet[2713]: E0123 01:09:48.630552 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e" Jan 23 01:09:49.026115 systemd[1]: Started sshd@18-172.239.48.230:22-68.220.241.50:48312.service - OpenSSH per-connection server daemon (68.220.241.50:48312). Jan 23 01:09:49.195648 sshd[5190]: Accepted publickey for core from 68.220.241.50 port 48312 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:49.197727 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:49.203279 systemd-logind[1531]: New session 19 of user core. Jan 23 01:09:49.207125 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 01:09:49.382218 sshd[5193]: Connection closed by 68.220.241.50 port 48312 Jan 23 01:09:49.383183 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:49.387472 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:09:49.388584 systemd[1]: sshd@18-172.239.48.230:22-68.220.241.50:48312.service: Deactivated successfully. Jan 23 01:09:49.390688 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:09:49.392794 systemd-logind[1531]: Removed session 19. Jan 23 01:09:49.624157 kubelet[2713]: E0123 01:09:49.624124 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39" Jan 23 01:09:49.624666 kubelet[2713]: E0123 01:09:49.624423 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b" Jan 23 01:09:50.624026 kubelet[2713]: E0123 01:09:50.622775 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592" Jan 23 01:09:54.417064 systemd[1]: Started sshd@19-172.239.48.230:22-68.220.241.50:37862.service - OpenSSH per-connection server daemon (68.220.241.50:37862). Jan 23 01:09:54.581583 sshd[5205]: Accepted publickey for core from 68.220.241.50 port 37862 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc Jan 23 01:09:54.582630 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:09:54.589124 systemd-logind[1531]: New session 20 of user core. Jan 23 01:09:54.594446 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:09:54.794361 sshd[5208]: Connection closed by 68.220.241.50 port 37862 Jan 23 01:09:54.795168 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jan 23 01:09:54.801484 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:09:54.803428 systemd[1]: sshd@19-172.239.48.230:22-68.220.241.50:37862.service: Deactivated successfully. 
Jan 23 01:09:54.806733 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 01:09:54.813073 systemd-logind[1531]: Removed session 20.
Jan 23 01:09:56.626565 kubelet[2713]: E0123 01:09:56.626529 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf"
Jan 23 01:09:59.623521 kubelet[2713]: E0123 01:09:59.622846 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e"
Jan 23 01:09:59.829565 systemd[1]: Started sshd@20-172.239.48.230:22-68.220.241.50:37864.service - OpenSSH per-connection server daemon (68.220.241.50:37864).
Jan 23 01:09:59.998514 sshd[5247]: Accepted publickey for core from 68.220.241.50 port 37864 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:10:00.000571 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:00.006286 systemd-logind[1531]: New session 21 of user core.
Jan 23 01:10:00.011143 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 01:10:00.199969 sshd[5250]: Connection closed by 68.220.241.50 port 37864
Jan 23 01:10:00.200275 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:00.207780 systemd[1]: sshd@20-172.239.48.230:22-68.220.241.50:37864.service: Deactivated successfully.
Jan 23 01:10:00.210042 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 01:10:00.214819 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit.
Jan 23 01:10:00.217155 systemd-logind[1531]: Removed session 21.
Jan 23 01:10:00.627735 kubelet[2713]: E0123 01:10:00.627653 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-9b4d8" podUID="4157aec9-2f10-4912-b876-2bb1a760ce39"
Jan 23 01:10:02.625397 kubelet[2713]: E0123 01:10:02.625356 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-znvnr" podUID="fb0b4136-7548-44aa-9706-52799d45da0f"
Jan 23 01:10:03.623218 kubelet[2713]: E0123 01:10:03.623020 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75bfc7c68c-flx8n" podUID="33350001-d074-4db1-9299-b1861aa3ad0b"
Jan 23 01:10:04.623939 kubelet[2713]: E0123 01:10:04.623695 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79578dbdbf-d2s9w" podUID="1f20c944-f2fa-454c-8f1a-5b6a04bf7592"
Jan 23 01:10:05.229765 systemd[1]: Started sshd@21-172.239.48.230:22-68.220.241.50:46728.service - OpenSSH per-connection server daemon (68.220.241.50:46728).
Jan 23 01:10:05.387023 sshd[5264]: Accepted publickey for core from 68.220.241.50 port 46728 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:10:05.388196 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:05.393037 systemd-logind[1531]: New session 22 of user core.
Jan 23 01:10:05.398136 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 01:10:05.569502 sshd[5267]: Connection closed by 68.220.241.50 port 46728
Jan 23 01:10:05.570143 sshd-session[5264]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:05.575426 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit.
Jan 23 01:10:05.576166 systemd[1]: sshd@21-172.239.48.230:22-68.220.241.50:46728.service: Deactivated successfully.
Jan 23 01:10:05.578886 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 01:10:05.583878 systemd-logind[1531]: Removed session 22.
Jan 23 01:10:08.622891 kubelet[2713]: E0123 01:10:08.622089 2713 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 01:10:09.622638 kubelet[2713]: E0123 01:10:09.622583 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wlzdw" podUID="ab709807-4327-49bc-a89a-808c81e848bf"
Jan 23 01:10:10.606312 systemd[1]: Started sshd@22-172.239.48.230:22-68.220.241.50:46730.service - OpenSSH per-connection server daemon (68.220.241.50:46730).
Jan 23 01:10:10.632304 kubelet[2713]: E0123 01:10:10.632262 2713 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5df4546f58-gg8t9" podUID="002ca359-8c71-4c3b-83f5-6e16f458e48e"
Jan 23 01:10:10.793233 sshd[5279]: Accepted publickey for core from 68.220.241.50 port 46730 ssh2: RSA SHA256:fbFqJMgpJnzygE4gdkx3sxvHZsD4H3wAmDidBrheqsc
Jan 23 01:10:10.793053 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:10:10.803036 systemd-logind[1531]: New session 23 of user core.
Jan 23 01:10:10.806451 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 01:10:11.019165 sshd[5282]: Connection closed by 68.220.241.50 port 46730
Jan 23 01:10:11.021528 sshd-session[5279]: pam_unix(sshd:session): session closed for user core
Jan 23 01:10:11.026605 systemd[1]: sshd@22-172.239.48.230:22-68.220.241.50:46730.service: Deactivated successfully.
Jan 23 01:10:11.027450 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit.
Jan 23 01:10:11.032583 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:10:11.036594 systemd-logind[1531]: Removed session 23.