Jan 23 18:57:55.158370 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:57:55.158419 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:57:55.158428 kernel: BIOS-provided physical RAM map:
Jan 23 18:57:55.158435 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 23 18:57:55.158441 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 18:57:55.158448 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 18:57:55.158459 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 23 18:57:55.158465 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 23 18:57:55.158476 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 18:57:55.158483 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 18:57:55.158490 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:57:55.158496 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 18:57:55.158503 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 18:57:55.158510 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:57:55.158521 kernel: NX (Execute Disable) protection: active
Jan 23 18:57:55.158528 kernel: APIC: Static calls initialized
Jan 23 18:57:55.158538 kernel: SMBIOS 2.8 present.
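The e820 map and kernel command line recorded above can be cross-checked from the running instance; a minimal read-only sketch (standard utilities, nothing Flatcar-specific assumed):

    cat /proc/cmdline                 # live command line; should match the "Command line:" entry
    sudo dmesg | grep -i 'BIOS-e820'  # replay the firmware-provided RAM map from the ring buffer
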
Jan 23 18:57:55.158546 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 23 18:57:55.158553 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:57:55.158560 kernel: Hypervisor detected: KVM
Jan 23 18:57:55.158570 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 18:57:55.158577 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:57:55.158584 kernel: kvm-clock: using sched offset of 12413542356 cycles
Jan 23 18:57:55.158592 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:57:55.158599 kernel: tsc: Detected 1999.999 MHz processor
Jan 23 18:57:55.158607 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:57:55.158614 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:57:55.158622 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 23 18:57:55.158630 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 18:57:55.158637 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:57:55.158647 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 18:57:55.158654 kernel: Using GB pages for direct mapping
Jan 23 18:57:55.158662 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:57:55.158669 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 23 18:57:55.158676 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158683 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158691 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158698 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 23 18:57:55.158705 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158716 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158727 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158734 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:57:55.158742 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 23 18:57:55.158750 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 23 18:57:55.158760 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 23 18:57:55.158768 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 23 18:57:55.158775 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 23 18:57:55.158783 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 23 18:57:55.158791 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 23 18:57:55.158798 kernel: No NUMA configuration found
Jan 23 18:57:55.158806 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 18:57:55.158814 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Jan 23 18:57:55.158821 kernel: Zone ranges:
Jan 23 18:57:55.158832 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:57:55.158839 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:57:55.158847 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:57:55.158854 kernel: Device empty
Jan 23 18:57:55.158862 kernel: Movable zone start for each node
Jan 23 18:57:55.158870 kernel: Early memory node ranges
Jan 23 18:57:55.158877 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 18:57:55.158889 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 23 18:57:55.158896 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:57:55.158910 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 18:57:55.158918 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:57:55.158925 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 18:57:55.158933 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 23 18:57:55.158944 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:57:55.158952 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:57:55.158959 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:57:55.158967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:57:55.158988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:57:55.159000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:57:55.159007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:57:55.159015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:57:55.159022 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:57:55.159030 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:57:55.159037 kernel: TSC deadline timer available
Jan 23 18:57:55.159045 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:57:55.159052 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:57:55.159060 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:57:55.159070 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:57:55.159078 kernel: CPU topo: Num. cores per package: 2
Jan 23 18:57:55.159085 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:57:55.159093 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:57:55.159101 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:57:55.159108 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 18:57:55.159115 kernel: kvm-guest: setup PV sched yield
Jan 23 18:57:55.159123 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 18:57:55.159131 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:57:55.159138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:57:55.159149 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:57:55.159156 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:57:55.159164 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:57:55.159171 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:57:55.159179 kernel: kvm-guest: PV spinlocks enabled
Jan 23 18:57:55.159186 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:57:55.159195 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:57:55.159203 kernel: random: crng init done
Jan 23 18:57:55.159213 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:57:55.159221 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:57:55.159229 kernel: Fallback order for Node 0: 0
Jan 23 18:57:55.159237 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 23 18:57:55.159245 kernel: Policy zone: Normal
Jan 23 18:57:55.159252 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:57:55.159260 kernel: software IO TLB: area num 2.
Jan 23 18:57:55.159268 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:57:55.159275 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:57:55.159285 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:57:55.159293 kernel: Dynamic Preempt: voluntary
Jan 23 18:57:55.159304 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:57:55.159313 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:57:55.159321 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:57:55.159329 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:57:55.159337 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:57:55.159344 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:57:55.159351 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:57:55.159361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:57:55.159369 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:57:55.159385 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:57:55.159396 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:57:55.159404 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 18:57:55.159411 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:57:55.159419 kernel: Console: colour VGA+ 80x25
Jan 23 18:57:55.159427 kernel: printk: legacy console [tty0] enabled
Jan 23 18:57:55.159438 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:57:55.159446 kernel: ACPI: Core revision 20240827
Jan 23 18:57:55.159457 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:57:55.159464 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:57:55.159472 kernel: x2apic enabled
Jan 23 18:57:55.159480 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:57:55.159488 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:57:55.159496 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:57:55.159506 kernel: kvm-guest: setup PV IPIs
Jan 23 18:57:55.159514 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:57:55.159522 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 23 18:57:55.159530 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Jan 23 18:57:55.159538 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:57:55.159553 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:57:55.159561 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:57:55.159569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:57:55.159577 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:57:55.159587 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:57:55.159595 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 18:57:55.159603 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 18:57:55.159611 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 18:57:55.159619 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 18:57:55.159627 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 18:57:55.159635 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:57:55.159642 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 18:57:55.159653 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:57:55.159661 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:57:55.159668 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:57:55.159676 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:57:55.159684 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:57:55.159692 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 18:57:55.159699 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:57:55.159711 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 23 18:57:55.159719 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 23 18:57:55.159729 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:57:55.159737 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:57:55.159745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:57:55.159753 kernel: landlock: Up and running.
Jan 23 18:57:55.159760 kernel: SELinux: Initializing.
Jan 23 18:57:55.159768 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:57:55.159776 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:57:55.159784 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 18:57:55.159792 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 18:57:55.159802 kernel: ... version: 0
Jan 23 18:57:55.159810 kernel: ... bit width: 48
Jan 23 18:57:55.159818 kernel: ... generic registers: 6
Jan 23 18:57:55.159825 kernel: ... value mask: 0000ffffffffffff
Jan 23 18:57:55.159833 kernel: ... max period: 00007fffffffffff
Jan 23 18:57:55.159840 kernel: ... fixed-purpose events: 0
Jan 23 18:57:55.159848 kernel: ... event mask: 000000000000003f
Jan 23 18:57:55.159856 kernel: signal: max sigframe size: 3376
Jan 23 18:57:55.159863 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:57:55.159871 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:57:55.159882 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:57:55.159890 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:57:55.159897 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:57:55.159905 kernel: .... node #0, CPUs: #1
Jan 23 18:57:55.159912 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:57:55.159924 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Jan 23 18:57:55.159932 kernel: Memory: 3953616K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 235480K reserved, 0K cma-reserved)
Jan 23 18:57:55.159939 kernel: devtmpfs: initialized
Jan 23 18:57:55.159947 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:57:55.159958 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:57:55.159965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:57:55.161995 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:57:55.162008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:57:55.162017 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:57:55.162024 kernel: audit: type=2000 audit(1769194670.478:1): state=initialized audit_enabled=0 res=1
Jan 23 18:57:55.162032 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:57:55.162039 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:57:55.162051 kernel: cpuidle: using governor menu
Jan 23 18:57:55.162058 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:57:55.162066 kernel: dca service started, version 1.12.1
Jan 23 18:57:55.162073 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 18:57:55.162081 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 18:57:55.162088 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:57:55.162096 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:57:55.162103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:57:55.162115 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:57:55.162126 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:57:55.162133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:57:55.162141 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:57:55.162148 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:57:55.162155 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:57:55.162163 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:57:55.162170 kernel: ACPI: Interpreter enabled
Jan 23 18:57:55.162177 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 18:57:55.162185 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:57:55.162195 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:57:55.162202 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:57:55.162210 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:57:55.162217 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:57:55.162579 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:57:55.162742 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:57:55.162891 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:57:55.162902 kernel: PCI host bridge to bus 0000:00
Jan 23 18:57:55.163113 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:57:55.163255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:57:55.163391 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:57:55.163524 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 18:57:55.163658 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 18:57:55.163791 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 23 18:57:55.163932 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:57:55.166166 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:57:55.166373 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:57:55.166634 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 18:57:55.166782 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 18:57:55.166927 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 18:57:55.167096 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:57:55.167282 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 23 18:57:55.167433 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 18:57:55.167579 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 18:57:55.167730 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 18:57:55.167914 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 18:57:55.170109 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 18:57:55.170262 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 18:57:55.170415 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 18:57:55.170562 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 18:57:55.170731 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:57:55.170880 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:57:55.173110 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:57:55.173269 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 23 18:57:55.173423 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 23 18:57:55.173591 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:57:55.173737 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 18:57:55.173754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:57:55.173762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:57:55.173770 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:57:55.173778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:57:55.173785 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:57:55.173798 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:57:55.173806 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:57:55.173813 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:57:55.173821 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:57:55.173829 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:57:55.173837 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:57:55.173844 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:57:55.173852 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:57:55.173860 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:57:55.173870 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:57:55.173878 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:57:55.173886 kernel: iommu: Default domain type: Translated
Jan 23 18:57:55.173894 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:57:55.173902 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:57:55.173909 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:57:55.173917 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 23 18:57:55.173925 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 23 18:57:55.174095 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:57:55.174250 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:57:55.174394 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:57:55.174404 kernel: vgaarb: loaded
Jan 23 18:57:55.174412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:57:55.174420 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:57:55.174428 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:57:55.174436 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:57:55.174444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:57:55.174455 kernel: pnp: PnP ACPI init
Jan 23 18:57:55.174676 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 18:57:55.174689 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 18:57:55.174697 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:57:55.174705 kernel: NET: Registered PF_INET protocol family
Jan 23 18:57:55.174713 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:57:55.174721 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 18:57:55.174729 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:57:55.174742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 18:57:55.174749 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 18:57:55.174757 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 18:57:55.174765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:57:55.174773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:57:55.174781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:57:55.174788 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:57:55.174933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:57:55.178826 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:57:55.178997 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:57:55.179141 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 18:57:55.179277 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 18:57:55.179411 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 23 18:57:55.179421 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:57:55.179429 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:57:55.179437 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 23 18:57:55.179445 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 23 18:57:55.179457 kernel: Initialise system trusted keyrings
Jan 23 18:57:55.179465 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:57:55.179473 kernel: Key type asymmetric registered
Jan 23 18:57:55.179481 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:57:55.179488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:57:55.179496 kernel: io scheduler mq-deadline registered
Jan 23 18:57:55.179504 kernel: io scheduler kyber registered
Jan 23 18:57:55.179511 kernel: io scheduler bfq registered
Jan 23 18:57:55.179519 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:57:55.179527 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:57:55.179538 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:57:55.179545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:57:55.179553 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:57:55.179561 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:57:55.179569 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:57:55.179576 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:57:55.179793 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 18:57:55.179806 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:57:55.179955 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 18:57:55.180120 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:57:54 UTC (1769194674)
Jan 23 18:57:55.180261 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 18:57:55.180271 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 18:57:55.180279 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:57:55.180287 kernel: Segment Routing with IPv6
Jan 23 18:57:55.180294 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:57:55.180302 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:57:55.180314 kernel: Key type dns_resolver registered
Jan 23 18:57:55.180321 kernel: IPI shorthand broadcast: enabled
Jan 23 18:57:55.180329 kernel: sched_clock: Marking stable (6162002941, 354294393)->(6769851323, -253553989)
Jan 23 18:57:55.180337 kernel: registered taskstats version 1
Jan 23 18:57:55.180344 kernel: Loading compiled-in X.509 certificates
Jan 23 18:57:55.180352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:57:55.180359 kernel: Demotion targets for Node 0: null
Jan 23 18:57:55.180367 kernel: Key type .fscrypt registered
Jan 23 18:57:55.180374 kernel: Key type fscrypt-provisioning registered
Jan 23 18:57:55.180385 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:57:55.180392 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:57:55.180400 kernel: ima: No architecture policies found
Jan 23 18:57:55.180407 kernel: clk: Disabling unused clocks
Jan 23 18:57:55.180415 kernel: Warning: unable to open an initial console.
Jan 23 18:57:55.180422 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:57:55.180430 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:57:55.180438 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:57:55.180445 kernel: Run /init as init process
Jan 23 18:57:55.180456 kernel: with arguments:
Jan 23 18:57:55.180463 kernel: /init
Jan 23 18:57:55.180471 kernel: with environment:
Jan 23 18:57:55.180497 kernel: HOME=/
Jan 23 18:57:55.180507 kernel: TERM=linux
Jan 23 18:57:55.180523 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:57:55.180534 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:57:55.180543 systemd[1]: Detected virtualization kvm.
Jan 23 18:57:55.180554 systemd[1]: Detected architecture x86-64.
Jan 23 18:57:55.180562 systemd[1]: Running in initrd.
Jan 23 18:57:55.180570 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:57:55.180579 systemd[1]: Hostname set to .
Jan 23 18:57:55.180587 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:57:55.180595 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:57:55.180604 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:57:55.180612 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:57:55.180624 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:57:55.180632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:57:55.180640 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:57:55.180649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:57:55.180659 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:57:55.180667 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:57:55.180678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:57:55.180687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:57:55.180696 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:57:55.180704 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:57:55.180712 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:57:55.180721 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:57:55.180729 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:57:55.180737 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:57:55.180746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:57:55.180757 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:57:55.180765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:57:55.180773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:57:55.180785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:57:55.180793 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:57:55.180804 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:57:55.180812 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:57:55.180821 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:57:55.180829 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:57:55.180838 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:57:55.180846 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:57:55.180854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:57:55.180863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:55.180871 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:57:55.180883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:57:55.180891 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:57:55.180937 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 18:57:55.180961 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:57:55.180970 systemd-journald[187]: Journal started
Jan 23 18:57:55.181016 systemd-journald[187]: Runtime Journal (/run/log/journal/658004741d47476b80fdac1b51081a02) is 8M, max 78.2M, 70.2M free.
Jan 23 18:57:55.113031 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 18:57:55.280474 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:57:55.280505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:57:55.280520 kernel: Bridge firewalling registered
Jan 23 18:57:55.193867 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 18:57:55.280010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:57:55.281759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:55.283487 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:57:55.287131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:57:55.290316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:57:55.294101 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:57:55.325120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:57:55.337051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:57:55.340152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:57:55.348123 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:57:55.355130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:57:55.356470 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:57:55.365110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:57:55.369120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:57:55.381110 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:57:55.419847 systemd-resolved[227]: Positive Trust Anchors:
Jan 23 18:57:55.421019 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:57:55.421054 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:57:55.428689 systemd-resolved[227]: Defaulting to hostname 'linux'.
Jan 23 18:57:55.432029 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:57:55.432783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:57:55.487020 kernel: SCSI subsystem initialized
Jan 23 18:57:55.497004 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:57:55.509010 kernel: iscsi: registered transport (tcp)
Jan 23 18:57:55.530174 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:57:55.530210 kernel: QLogic iSCSI HBA Driver
Jan 23 18:57:55.553812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:57:55.576111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:57:55.579907 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:57:55.646889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:57:55.649235 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:57:55.705007 kernel: raid6: avx2x4 gen() 30098 MB/s
Jan 23 18:57:55.723005 kernel: raid6: avx2x2 gen() 31089 MB/s
Jan 23 18:57:55.741362 kernel: raid6: avx2x1 gen() 19476 MB/s
Jan 23 18:57:55.741384 kernel: raid6: using algorithm avx2x2 gen() 31089 MB/s
Jan 23 18:57:55.764209 kernel: raid6: .... xor() 28291 MB/s, rmw enabled
Jan 23 18:57:55.764234 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 18:57:55.978072 kernel: xor: automatically using best checksumming function avx
Jan 23 18:57:56.174030 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:57:56.184116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:57:56.187163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:57:56.220524 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Jan 23 18:57:56.228416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:57:56.232806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:57:56.277815 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Jan 23 18:57:56.312303 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:57:56.314839 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:57:56.430476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:57:56.436340 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:57:56.523405 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 18:57:56.523460 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:57:56.580028 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 23 18:57:56.584130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:57:56.584273 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:56.590767 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:57:56.590804 kernel: scsi host0: Virtio SCSI HBA
Jan 23 18:57:56.588332 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:56.611822 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 18:57:56.611885 kernel: libata version 3.00 loaded.
Jan 23 18:57:56.612223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:57:56.616816 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:57:56.661198 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 18:57:56.667039 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 18:57:56.671402 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 23 18:57:56.674495 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 18:57:56.674727 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 18:57:56.684955 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 18:57:56.685388 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 18:57:56.708859 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 18:57:56.709239 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 18:57:56.709441 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 18:57:56.859065 kernel: scsi host1: ahci
Jan 23 18:57:56.860151 kernel: scsi host2: ahci
Jan 23 18:57:56.862057 kernel: scsi host3: ahci
Jan 23 18:57:56.863018 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 18:57:56.863067 kernel: GPT:9289727 != 167739391
Jan 23 18:57:56.863080 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 18:57:56.863096 kernel: GPT:9289727 != 167739391
Jan 23 18:57:56.863111 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 18:57:56.863126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:57:56.863137 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 18:57:56.864005 kernel: scsi host4: ahci
Jan 23 18:57:56.866325 kernel: scsi host5: ahci
Jan 23 18:57:56.867552 kernel: scsi host6: ahci
Jan 23 18:57:56.867798 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Jan 23 18:57:56.867813 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Jan 23 18:57:56.867824 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Jan 23 18:57:56.867835 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Jan 23 18:57:56.867846 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Jan 23 18:57:56.867856 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Jan 23 18:57:57.045420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:57:57.175805 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.175901 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.177991 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.180506 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.181002 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.186004 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 18:57:57.259767 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 18:57:57.277143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 18:57:57.286000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 18:57:57.286791 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 18:57:57.288783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:57:57.300355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
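The GPT warnings above are expected on a first boot: the Flatcar image ships a backup GPT header written for a smaller disk (sector 9289727) than the provisioned 167739392-sector volume, and the disk-uuid.service entries just below show the layout being rewritten automatically. For reference, a hedged sketch of the manual repair (not needed here; run only against the correct disk):

    sudo sgdisk -e /dev/sda   # relocate the backup GPT header/table to the last sectors of the disk
    sudo sgdisk -v /dev/sda   # verify the repaired partition table
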
Jan 23 18:57:57.302427 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:57:57.303287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:57:57.305212 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:57:57.309102 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:57:57.312031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:57:57.325822 disk-uuid[610]: Primary Header is updated.
Jan 23 18:57:57.325822 disk-uuid[610]: Secondary Entries is updated.
Jan 23 18:57:57.325822 disk-uuid[610]: Secondary Header is updated.
Jan 23 18:57:57.332847 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:57:57.339018 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:57:57.353003 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:57:58.357255 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:57:58.357313 disk-uuid[615]: The operation has completed successfully.
Jan 23 18:57:58.424319 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:57:58.424513 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:57:58.459738 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:57:58.495265 sh[632]: Success
Jan 23 18:57:58.517152 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:57:58.517219 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:57:58.519509 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:57:58.536041 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:57:58.593829 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:57:58.600092 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:57:58.611083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:57:58.626038 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (645)
Jan 23 18:57:58.633147 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:57:58.633180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:58.645015 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 18:57:58.645141 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:57:58.645155 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:57:58.649960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:57:58.652613 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:57:58.653761 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:57:58.656256 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:57:58.659124 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
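verity-setup.service above maps the USR-A partition (verity.usr=PARTUUID=...) as the dm-verity device /dev/mapper/usr, with the root hash taken from verity.usrhash= on the kernel command line, so every read of /usr is authenticated against the Merkle tree. A hedged sketch of the equivalent plumbing; the <data-device> and <hash-device> placeholders stand in for paths the initrd derives itself, so treat this as illustrative, not Flatcar's exact invocation:

    # inspect the mapping the initrd created
    sudo veritysetup status usr
    # roughly what verity-setup does under the hood (illustrative)
    veritysetup open <data-device> usr <hash-device> \
        e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
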
Jan 23 18:57:58.705016 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (681)
Jan 23 18:57:58.712407 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:58.712474 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:57:58.725052 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:57:58.725093 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:57:58.725107 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:57:58.736015 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:57:58.738911 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:57:58.743122 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:57:58.864838 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:57:58.903332 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:57:58.997408 systemd-networkd[814]: lo: Link UP
Jan 23 18:57:58.998066 systemd-networkd[814]: lo: Gained carrier
Jan 23 18:57:59.009320 systemd-networkd[814]: Enumeration completed
Jan 23 18:57:59.010187 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:57:59.063938 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:57:59.063945 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:57:59.087177 systemd-networkd[814]: eth0: Link UP
Jan 23 18:57:59.087800 systemd-networkd[814]: eth0: Gained carrier
Jan 23 18:57:59.087824 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:57:59.115786 systemd[1]: Reached target network.target - Network.
Jan 23 18:57:59.341936 ignition[741]: Ignition 2.22.0
Jan 23 18:57:59.341958 ignition[741]: Stage: fetch-offline
Jan 23 18:57:59.342050 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:59.342063 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:57:59.345827 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:57:59.342223 ignition[741]: parsed url from cmdline: ""
Jan 23 18:57:59.342229 ignition[741]: no config URL provided
Jan 23 18:57:59.342257 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:57:59.342270 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:57:59.342277 ignition[741]: failed to fetch config: resource requires networking
Jan 23 18:57:59.342797 ignition[741]: Ignition finished successfully
Jan 23 18:57:59.351187 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 18:57:59.523606 ignition[823]: Ignition 2.22.0
Jan 23 18:57:59.523630 ignition[823]: Stage: fetch
Jan 23 18:57:59.523883 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:57:59.523913 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:57:59.524093 ignition[823]: parsed url from cmdline: ""
Jan 23 18:57:59.524099 ignition[823]: no config URL provided
Jan 23 18:57:59.524109 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:57:59.524122 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:57:59.524163 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 23 18:57:59.524564 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 18:57:59.725436 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 23 18:57:59.725679 ignition[823]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 18:58:00.030177 systemd-networkd[814]: eth0: DHCPv4 address 172.238.168.154/24, gateway 172.238.168.1 acquired from 23.192.120.212
Jan 23 18:58:00.126624 ignition[823]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 23 18:58:00.227859 ignition[823]: PUT result: OK
Jan 23 18:58:00.227969 ignition[823]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 23 18:58:00.339388 ignition[823]: GET result: OK
Jan 23 18:58:00.339905 ignition[823]: parsing config with SHA512: f10ce7523b238d5cb169fcb548302dcffa9aad0a0e4959a8b812c47b2a84afc0ca870bb4a7a240871e50f34de582689edca6a1dcf1ba6eaf41ee2b1e32db552c
Jan 23 18:58:00.380149 unknown[823]: fetched base config from "system"
Jan 23 18:58:00.380173 unknown[823]: fetched base config from "system"
Jan 23 18:58:00.380720 ignition[823]: fetch: fetch complete
Jan 23 18:58:00.380183 unknown[823]: fetched user config from "akamai"
Jan 23 18:58:00.380730 ignition[823]: fetch: fetch passed
Jan 23 18:58:00.380807 ignition[823]: Ignition finished successfully
Jan 23 18:58:00.387540 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 18:58:00.398186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 18:58:00.437225 systemd-networkd[814]: eth0: Gained IPv6LL
Jan 23 18:58:00.480728 ignition[830]: Ignition 2.22.0
Jan 23 18:58:00.480748 ignition[830]: Stage: kargs
Jan 23 18:58:00.480962 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:00.484717 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 18:58:00.480976 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:58:00.481733 ignition[830]: kargs: kargs passed
Jan 23 18:58:00.488212 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 18:58:00.481798 ignition[830]: Ignition finished successfully
Jan 23 18:58:00.677373 ignition[837]: Ignition 2.22.0
Jan 23 18:58:00.677401 ignition[837]: Stage: disks
Jan 23 18:58:00.677610 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:00.677628 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:58:00.678928 ignition[837]: disks: disks passed
Jan 23 18:58:00.679034 ignition[837]: Ignition finished successfully
Jan 23 18:58:00.681906 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 18:58:00.683959 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
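The fetch stage above follows the token-authenticated exchange of Linode's Metadata Service: a PUT to /v1/token (which fails until DHCP completes at 18:58:00.030177), then a GET for /v1/user-data. Roughly the same exchange with curl, from inside the instance; the header names follow Linode's public documentation but should be treated as assumptions here, and the user-data payload is base64-encoded:

    TOKEN=$(curl -s -X PUT -H "Metadata-Token-Expiry-Seconds: 3600" \
        http://169.254.169.254/v1/token)
    curl -s -H "Metadata-Token: ${TOKEN}" \
        http://169.254.169.254/v1/user-data | base64 -d
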
Jan 23 18:58:00.685360 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 18:58:00.687466 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:58:00.689308 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:58:00.691183 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:58:00.694867 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 18:58:00.764860 systemd-fsck[845]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 18:58:00.770219 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 18:58:00.774682 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 18:58:00.933064 kernel: EXT4-fs (sda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 18:58:00.935125 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 18:58:00.936793 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:58:00.939844 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:58:00.944050 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 18:58:00.946125 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 18:58:00.947320 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 18:58:00.947355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:58:00.957799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 18:58:00.960888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 18:58:00.971035 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853)
Jan 23 18:58:00.976211 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:58:00.976313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:58:00.989014 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:58:00.989096 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:58:00.989160 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:58:00.994354 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:58:01.035223 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 18:58:01.043049 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory
Jan 23 18:58:01.048350 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 18:58:01.053237 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 18:58:01.179068 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 18:58:01.182639 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 18:58:01.186124 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 18:58:01.220970 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 18:58:01.225370 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:58:01.246597 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 18:58:01.325012 ignition[967]: INFO : Ignition 2.22.0
Jan 23 18:58:01.325012 ignition[967]: INFO : Stage: mount
Jan 23 18:58:01.327365 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:01.327365 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:58:01.347521 ignition[967]: INFO : mount: mount passed
Jan 23 18:58:01.348733 ignition[967]: INFO : Ignition finished successfully
Jan 23 18:58:01.350087 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 18:58:01.353406 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 18:58:01.937436 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:58:01.962126 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979)
Jan 23 18:58:01.966509 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:58:01.966589 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:58:01.975047 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:58:01.975094 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:58:01.977430 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:58:01.982680 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:58:02.117262 ignition[995]: INFO : Ignition 2.22.0
Jan 23 18:58:02.117262 ignition[995]: INFO : Stage: files
Jan 23 18:58:02.119467 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:02.119467 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:58:02.119467 ignition[995]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 18:58:02.123017 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 18:58:02.123017 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 18:58:02.125525 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 18:58:02.125525 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 18:58:02.125525 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 18:58:02.125198 unknown[995]: wrote ssh authorized keys file for user: core
Jan 23 18:58:02.130036 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 18:58:02.130036 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 18:58:02.443999 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 18:58:02.681371 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 18:58:02.683615 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 18:58:02.693834 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 23 18:58:03.181527 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 18:58:04.773874 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 23 18:58:04.773874 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 18:58:04.777821 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 18:58:04.777821 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 18:58:04.777821 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 18:58:04.777821 ignition[995]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 18:58:04.784429 ignition[995]: INFO : files: files passed
Jan 23 18:58:04.784429 ignition[995]: INFO : Ignition finished successfully
Jan 23 18:58:04.785331 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 18:58:04.792188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 18:58:04.796189 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 18:58:04.829959 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 18:58:04.831148 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 18:58:04.867775 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:58:04.869215 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:58:04.869215 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 18:58:04.871144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 18:58:04.873460 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 18:58:04.877110 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 18:58:04.936197 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 18:58:04.936358 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 18:58:04.938260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 18:58:04.939627 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 18:58:04.941452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 18:58:04.943130 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 18:58:04.989681 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 18:58:04.992876 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 18:58:05.016436 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:58:05.017466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:58:05.019517 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 18:58:05.021420 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 18:58:05.021552 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 18:58:05.023629 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 18:58:05.024915 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 18:58:05.027053 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 18:58:05.028650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:58:05.030250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 18:58:05.031959 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:58:05.033667 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 18:58:05.035306 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:58:05.037436 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 18:58:05.039337 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 18:58:05.041304 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 18:58:05.042753 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 18:58:05.042949 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:58:05.044748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:58:05.045875 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:58:05.047268 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 18:58:05.047406 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:58:05.048893 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 18:58:05.049102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:58:05.051187 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 18:58:05.051338 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 18:58:05.052484 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 18:58:05.052639 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 18:58:05.056172 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 18:58:05.059217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 18:58:05.061092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 18:58:05.061291 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:58:05.064661 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 18:58:05.064771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:58:05.073877 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 18:58:05.074020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 18:58:05.119386 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 18:58:05.140541 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 18:58:05.141373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 18:58:05.153055 ignition[1050]: INFO : Ignition 2.22.0
Jan 23 18:58:05.153055 ignition[1050]: INFO : Stage: umount
Jan 23 18:58:05.155269 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:58:05.155269 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:58:05.155269 ignition[1050]: INFO : umount: umount passed
Jan 23 18:58:05.155269 ignition[1050]: INFO : Ignition finished successfully
Jan 23 18:58:05.156595 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 18:58:05.156767 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 18:58:05.158510 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 18:58:05.158856 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 18:58:05.159922 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 18:58:05.160032 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 18:58:05.161439 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 18:58:05.161508 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 18:58:05.162896 systemd[1]: Stopped target network.target - Network.
Jan 23 18:58:05.164436 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 18:58:05.164508 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:58:05.166322 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 18:58:05.167893 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 18:58:05.168221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:58:05.169464 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 18:58:05.171474 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 18:58:05.173300 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 18:58:05.173367 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:58:05.174886 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 18:58:05.174940 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:58:05.176340 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 18:58:05.176421 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 18:58:05.177803 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 18:58:05.177874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 18:58:05.179429 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 18:58:05.179499 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 18:58:05.181276 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 18:58:05.183184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 18:58:05.195388 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 18:58:05.195838 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 18:58:05.200147 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 18:58:05.200495 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 18:58:05.200659 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 18:58:05.206193 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 18:58:05.206900 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 18:58:05.208873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 18:58:05.208927 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:58:05.212146 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 18:58:05.216457 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 18:58:05.216533 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:58:05.219298 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 18:58:05.219368 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:58:05.223618 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 18:58:05.223684 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:58:05.225051 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 18:58:05.225134 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:58:05.227155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:58:05.233484 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 18:58:05.233562 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:58:05.251375 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 18:58:05.251790 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 18:58:05.260191 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 18:58:05.260501 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:58:05.261751 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 18:58:05.261816 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:58:05.262864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 18:58:05.262915 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:58:05.264684 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 18:58:05.264747 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:58:05.267351 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 18:58:05.267417 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:58:05.269154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 18:58:05.269221 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:58:05.271943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 18:58:05.273479 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 18:58:05.273585 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:58:05.276522 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 18:58:05.276589 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:58:05.279213 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 18:58:05.279275 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:58:05.283211 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 18:58:05.283269 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:58:05.286047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:58:05.286120 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:58:05.290426 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 18:58:05.290493 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 18:58:05.290738 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 18:58:05.290793 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:58:05.291289 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 18:58:05.291424 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 18:58:05.293314 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 18:58:05.295591 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 18:58:05.314814 systemd[1]: Switching root.
Jan 23 18:58:05.353763 systemd-journald[187]: Journal stopped
Jan 23 18:58:06.883957 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Jan 23 18:58:06.884070 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 18:58:06.884086 kernel: SELinux: policy capability open_perms=1
Jan 23 18:58:06.884097 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 18:58:06.884107 kernel: SELinux: policy capability always_check_network=0
Jan 23 18:58:06.884121 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 18:58:06.884133 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 18:58:06.884143 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 18:58:06.884154 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 18:58:06.884164 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 18:58:06.884175 kernel: audit: type=1403 audit(1769194685.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 18:58:06.884187 systemd[1]: Successfully loaded SELinux policy in 84.580ms.
Jan 23 18:58:06.884202 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.680ms.
Jan 23 18:58:06.884215 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:58:06.884228 systemd[1]: Detected virtualization kvm.
Jan 23 18:58:06.884239 systemd[1]: Detected architecture x86-64.
Jan 23 18:58:06.884253 systemd[1]: Detected first boot.
Jan 23 18:58:06.884265 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:58:06.884277 zram_generator::config[1092]: No configuration found.
Jan 23 18:58:06.884289 kernel: Guest personality initialized and is inactive
Jan 23 18:58:06.884300 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 18:58:06.884311 kernel: Initialized host personality
Jan 23 18:58:06.884322 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 18:58:06.884333 systemd[1]: Populated /etc with preset unit settings.
Jan 23 18:58:06.884349 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 18:58:06.884360 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 18:58:06.884372 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 18:58:06.884383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 18:58:06.884395 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 18:58:06.884407 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 18:58:06.884419 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 18:58:06.884433 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 18:58:06.884444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 18:58:06.884456 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 18:58:06.884467 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 18:58:06.884479 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 18:58:06.884490 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:58:06.884502 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:58:06.884513 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 18:58:06.884527 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 18:58:06.884544 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 18:58:06.884556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:58:06.884568 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 18:58:06.884580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:58:06.884591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:58:06.884603 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 18:58:06.884617 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 18:58:06.884629 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:58:06.884641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 18:58:06.884652 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:58:06.884664 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:58:06.884676 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:58:06.884687 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:58:06.884699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 18:58:06.884711 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 18:58:06.884725 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 18:58:06.884737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:58:06.884759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:58:06.884771 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:58:06.884786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 18:58:06.884798 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 18:58:06.884810 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 18:58:06.884822 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 18:58:06.884834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:06.884845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 18:58:06.884857 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 18:58:06.884869 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 18:58:06.884884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 18:58:06.884896 systemd[1]: Reached target machines.target - Containers.
Jan 23 18:58:06.884907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 18:58:06.884919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:58:06.884931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:58:06.884943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 18:58:06.884955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:58:06.884966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:58:06.885019 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:58:06.885037 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 18:58:06.885049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:58:06.885071 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 18:58:06.885083 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 18:58:06.885095 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 18:58:06.885106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 18:58:06.885118 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 18:58:06.885131 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:58:06.885146 kernel: fuse: init (API version 7.41)
Jan 23 18:58:06.885157 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:58:06.885169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:58:06.885181 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:58:06.885193 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 18:58:06.885205 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 18:58:06.885216 kernel: ACPI: bus type drm_connector registered
Jan 23 18:58:06.885228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:58:06.885242 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 18:58:06.885254 systemd[1]: Stopped verity-setup.service.
Jan 23 18:58:06.885266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:06.885278 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 18:58:06.885289 kernel: loop: module loaded
Jan 23 18:58:06.885301 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 18:58:06.885312 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 18:58:06.885330 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 18:58:06.885342 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 18:58:06.885357 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 18:58:06.885403 systemd-journald[1173]: Collecting audit messages is disabled.
Jan 23 18:58:06.885427 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:58:06.885439 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 18:58:06.885454 systemd-journald[1173]: Journal started
Jan 23 18:58:06.885475 systemd-journald[1173]: Runtime Journal (/run/log/journal/b86f2ec244af433b916751456d40a36e) is 8M, max 78.2M, 70.2M free.
Jan 23 18:58:06.376271 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 18:58:06.393779 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 18:58:06.394864 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 18:58:06.895138 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 18:58:06.903132 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 18:58:06.907520 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:58:06.909038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:58:06.909440 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:58:06.910592 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:58:06.910903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:58:06.912100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:58:06.912404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:58:06.913631 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 18:58:06.913922 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 18:58:06.915071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:58:06.915288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:58:06.916496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:58:06.917710 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 18:58:06.918918 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 18:58:07.003482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 18:58:07.025892 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 18:58:07.026825 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 18:58:07.026882 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:58:07.029341 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 18:58:07.135294 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 18:58:07.136460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:58:07.142229 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 18:58:07.146011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 18:58:07.148102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:58:07.151137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 18:58:07.151967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:58:07.155062 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:58:07.161351 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 18:58:07.165058 systemd-journald[1173]: Time spent on flushing to /var/log/journal/b86f2ec244af433b916751456d40a36e is 74.600ms for 1003 entries.
Jan 23 18:58:07.165058 systemd-journald[1173]: System Journal (/var/log/journal/b86f2ec244af433b916751456d40a36e) is 8M, max 195.6M, 187.6M free.
Jan 23 18:58:07.311058 systemd-journald[1173]: Received client request to flush runtime journal.
Jan 23 18:58:07.311119 kernel: loop0: detected capacity change from 0 to 219144
Jan 23 18:58:07.170191 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:58:07.200001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:58:07.204271 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 18:58:07.227328 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 18:58:07.235159 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:58:07.300936 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 18:58:07.302123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 18:58:07.317074 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 18:58:07.325959 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 18:58:07.436252 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 18:58:07.443526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 18:58:07.448080 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 18:58:07.480820 kernel: loop1: detected capacity change from 0 to 110984
Jan 23 18:58:07.482565 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jan 23 18:58:07.482589 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jan 23 18:58:07.487939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:58:07.520594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:58:07.524688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:58:07.540694 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 18:58:07.603703 kernel: loop2: detected capacity change from 0 to 8
Jan 23 18:58:07.698674 kernel: loop3: detected capacity change from 0 to 128560
Jan 23 18:58:07.813020 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 18:58:07.825700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:58:07.886189 kernel: loop4: detected capacity change from 0 to 219144
Jan 23 18:58:07.875958 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 23 18:58:07.875971 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 23 18:58:07.881370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:58:07.923029 kernel: loop5: detected capacity change from 0 to 110984
Jan 23 18:58:08.022035 kernel: loop6: detected capacity change from 0 to 8
Jan 23 18:58:08.052400 kernel: loop7: detected capacity change from 0 to 128560
Jan 23 18:58:08.099789 (sd-merge)[1245]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Jan 23 18:58:08.100949 (sd-merge)[1245]: Merged extensions into '/usr'.
Jan 23 18:58:08.115593 systemd[1]: Reload requested from client PID 1216 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 18:58:08.116079 systemd[1]: Reloading...
Jan 23 18:58:08.622016 zram_generator::config[1273]: No configuration found.
Jan 23 18:58:09.033902 systemd[1]: Reloading finished in 916 ms.
Jan 23 18:58:09.050069 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 18:58:09.056342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 18:58:09.057885 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 18:58:09.059196 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 18:58:09.069493 systemd[1]: Starting ensure-sysext.service...
Jan 23 18:58:09.074108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:58:09.090120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:58:09.126288 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)...
Jan 23 18:58:09.126303 systemd[1]: Reloading...
Jan 23 18:58:09.159141 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 18:58:09.159835 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 18:58:09.160371 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 18:58:09.160968 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 18:58:09.162410 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 18:58:09.163059 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 23 18:58:09.163217 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Jan 23 18:58:09.177769 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:58:09.177901 systemd-tmpfiles[1318]: Skipping /boot
Jan 23 18:58:09.213959 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 18:58:09.214550 systemd-tmpfiles[1318]: Skipping /boot
Jan 23 18:58:09.259425 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Jan 23 18:58:09.342071 zram_generator::config[1346]: No configuration found.
Jan 23 18:58:09.631065 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 18:58:09.757538 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 18:58:09.757780 systemd[1]: Reloading finished in 630 ms.
Jan 23 18:58:09.767961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:58:09.771082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:58:09.772093 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 18:58:09.784025 kernel: ACPI: button: Power Button [PWRF]
Jan 23 18:58:09.820836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:09.825227 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 18:58:09.829414 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 18:58:09.830375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:58:09.833320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:58:09.961648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 18:58:09.972825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 18:58:09.973854 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:58:09.973966 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:58:10.013394 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 18:58:10.034412 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:58:10.054154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:58:10.058080 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 18:58:10.060245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:10.085256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:58:10.085528 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:58:10.107225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 18:58:10.107501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 18:58:10.112919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:10.113284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 18:58:10.119281 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 18:58:10.119581 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 18:58:10.124237 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 18:58:10.137237 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 18:58:10.138206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 18:58:10.138331 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 18:58:10.141949 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 18:58:10.142760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 18:58:10.144172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 18:58:10.145082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 18:58:10.176492 systemd[1]: Finished ensure-sysext.service.
Jan 23 18:58:10.184935 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 18:58:10.186168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 18:58:10.187440 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 18:58:10.187868 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 18:58:10.190509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 18:58:10.190815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 18:58:10.196210 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 18:58:10.197620 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 18:58:10.227940 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 18:58:10.231050 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 18:58:10.259826 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 18:58:10.260890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 18:58:10.307237 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 18:58:10.314953 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 18:58:10.416007 augenrules[1492]: No rules
Jan 23 18:58:10.420558 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 18:58:10.423075 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 18:58:10.428783 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 18:58:10.436291 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 18:58:10.465038 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:58:10.485039 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 18:58:10.545006 kernel: EDAC MC: Ver: 3.0.0
Jan 23 18:58:10.713965 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 18:58:10.718500 systemd-networkd[1438]: lo: Link UP
Jan 23 18:58:10.718516 systemd-networkd[1438]: lo: Gained carrier
Jan 23 18:58:10.722875 systemd-networkd[1438]: Enumeration completed
Jan 23 18:58:10.724161 systemd-timesyncd[1460]: No network connectivity, watching for changes.
Jan 23 18:58:10.725364 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:58:10.725380 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:58:10.729745 systemd-networkd[1438]: eth0: Link UP
Jan 23 18:58:10.729944 systemd-networkd[1438]: eth0: Gained carrier
Jan 23 18:58:10.729972 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:58:10.736920 systemd-resolved[1439]: Positive Trust Anchors:
Jan 23 18:58:10.736941 systemd-resolved[1439]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:58:10.736971 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:58:10.743606 systemd-resolved[1439]: Defaulting to hostname 'linux'.
Jan 23 18:58:10.783444 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:58:10.784403 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:58:10.785640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:58:10.787443 systemd[1]: Reached target network.target - Network.
Jan 23 18:58:10.788216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:58:10.789021 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:58:10.790211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 18:58:10.791191 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 18:58:10.792020 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 18:58:10.792785 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 18:58:10.793585 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 18:58:10.793626 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:58:10.794339 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 18:58:10.795352 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 18:58:10.796278 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 18:58:10.803633 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:58:10.806172 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 18:58:10.808791 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 18:58:10.811843 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 18:58:10.812841 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 18:58:10.813697 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 18:58:10.817201 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 18:58:10.818400 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 18:58:10.820836 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 18:58:10.823350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 18:58:10.826778 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 18:58:10.828547 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:58:10.829304 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:58:10.830242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:58:10.830300 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 18:58:10.846165 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 18:58:10.853173 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 18:58:10.856608 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 18:58:10.866035 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 18:58:10.875194 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 18:58:10.883449 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 18:58:10.884841 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 18:58:10.888514 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 18:58:10.894256 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 18:58:10.911055 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 18:58:10.921519 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 18:58:10.928209 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jan 23 18:58:10.928501 oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jan 23 18:58:10.931723 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting
Jan 23 18:58:10.931770 oslogin_cache_refresh[1520]: Failure getting users, quitting
Jan 23 18:58:10.931838 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:58:10.931885 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 18:58:10.932003 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache
Jan 23 18:58:10.932041 oslogin_cache_refresh[1520]: Refreshing group entry cache
Jan 23 18:58:10.932205 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 18:58:10.932652 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jan 23 18:58:10.932705 oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jan 23 18:58:10.932766 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:58:10.932794 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 18:58:10.935399 extend-filesystems[1519]: Found /dev/sda6
Jan 23 18:58:10.943204 extend-filesystems[1519]: Found /dev/sda9
Jan 23 18:58:10.958018 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 18:58:10.960082 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 18:58:10.960827 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 18:58:10.961676 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 18:58:10.972357 jq[1518]: false
Jan 23 18:58:10.974390 extend-filesystems[1519]: Checking size of /dev/sda9
Jan 23 18:58:10.974854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 18:58:10.984418 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 18:58:10.990571 coreos-metadata[1515]: Jan 23 18:58:10.990 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Jan 23 18:58:11.012763 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 18:58:11.014059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 18:58:11.014316 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 18:58:11.014775 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 18:58:11.015050 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 18:58:11.016145 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 18:58:11.016401 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 18:58:11.029818 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 18:58:11.030239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 18:58:11.206544 extend-filesystems[1519]: Resized partition /dev/sda9 Jan 23 18:58:11.209811 jq[1537]: true Jan 23 18:58:11.218185 update_engine[1536]: I20260123 18:58:11.218020 1536 main.cc:92] Flatcar Update Engine starting Jan 23 18:58:11.230111 extend-filesystems[1553]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:58:11.249999 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 18:58:11.262915 jq[1559]: true Jan 23 18:58:11.270848 tar[1545]: linux-amd64/LICENSE Jan 23 18:58:11.276476 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:58:11.603564 update_engine[1536]: I20260123 18:58:11.443475 1536 update_check_scheduler.cc:74] Next update check in 3m23s Jan 23 18:58:11.426407 dbus-daemon[1516]: [system] SELinux support is enabled Jan 23 18:58:11.603893 tar[1545]: linux-amd64/helm Jan 23 18:58:11.426998 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:58:11.447610 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:58:11.515914 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:58:11.515965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:58:11.518454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:58:11.518478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:58:11.525245 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:58:11.575536 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:58:11.575582 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:58:11.611867 bash[1584]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:58:11.577410 systemd-logind[1533]: New seat seat0. Jan 23 18:58:11.582122 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:58:11.611936 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:58:11.618923 systemd[1]: Starting sshkeys.service... Jan 23 18:58:11.779063 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:58:11.826377 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 18:58:12.029926 locksmithd[1585]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:58:12.093090 coreos-metadata[1515]: Jan 23 18:58:12.078 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 18:58:12.127408 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:58:12.561206 coreos-metadata[1588]: Jan 23 18:58:12.547 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 18:58:12.596161 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:58:12.602213 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:58:12.730403 systemd[1]: issuegen.service: Deactivated successfully. 
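The resize2fs and EXT4-fs lines above record the root filesystem growing from 553472 to 20360187 blocks while mounted. At the 4 KiB block size the kernel reports, a quick check of what that means in bytes:

    # Sizes taken from the EXT4-fs resize line above (4 KiB blocks).
    old_blocks, new_blocks, block_size = 553_472, 20_360_187, 4096
    print(old_blocks * block_size / 2**30)  # ≈ 2.11 GiB before the resize
    print(new_blocks * block_size / 2**30)  # ≈ 77.67 GiB after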
Jan 23 18:58:12.731083 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:58:12.755127 systemd-networkd[1438]: eth0: Gained IPv6LL Jan 23 18:58:12.758009 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:12.766255 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:58:12.886288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:58:12.893299 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:58:12.898258 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:58:12.900478 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:58:12.951366 containerd[1564]: time="2026-01-23T18:58:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:58:12.954020 containerd[1564]: time="2026-01-23T18:58:12.953913091Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:58:13.022020 containerd[1564]: time="2026-01-23T18:58:13.021223025Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="41.33µs" Jan 23 18:58:13.022020 containerd[1564]: time="2026-01-23T18:58:13.021393375Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:58:13.022020 containerd[1564]: time="2026-01-23T18:58:13.021469905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:58:13.022020 containerd[1564]: time="2026-01-23T18:58:13.021916275Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:58:13.022020 containerd[1564]: time="2026-01-23T18:58:13.021949845Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:58:13.022455 containerd[1564]: time="2026-01-23T18:58:13.022425825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:58:13.022690 containerd[1564]: time="2026-01-23T18:58:13.022661275Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:58:13.023203 containerd[1564]: time="2026-01-23T18:58:13.023181396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:58:13.023761 containerd[1564]: time="2026-01-23T18:58:13.023737966Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:58:13.023823 containerd[1564]: time="2026-01-23T18:58:13.023808306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:58:13.023904 containerd[1564]: time="2026-01-23T18:58:13.023875936Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:58:13.024829 containerd[1564]: time="2026-01-23T18:58:13.023965496Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:58:13.024829 containerd[1564]: time="2026-01-23T18:58:13.024139216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:58:13.024829 containerd[1564]: time="2026-01-23T18:58:13.024710576Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:58:13.024829 containerd[1564]: time="2026-01-23T18:58:13.024753796Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:58:13.024829 containerd[1564]: time="2026-01-23T18:58:13.024766406Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:58:13.025237 containerd[1564]: time="2026-01-23T18:58:13.025217197Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:58:13.025591 containerd[1564]: time="2026-01-23T18:58:13.025572247Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:58:13.025770 containerd[1564]: time="2026-01-23T18:58:13.025753147Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:58:13.047138 containerd[1564]: time="2026-01-23T18:58:13.047043638Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048256618Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048299858Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048319588Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048336998Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048356008Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048373388Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048393628Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048406448Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048417208Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048427318Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048440828Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048681078Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048724238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:58:13.050002 containerd[1564]: time="2026-01-23T18:58:13.048750908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048763348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048774428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048798708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048813328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048829548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048846328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048856938Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.048867468Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.049031129Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.049061559Z" level=info msg="Start snapshots syncer" Jan 23 18:58:13.050293 containerd[1564]: time="2026-01-23T18:58:13.049110769Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:58:13.050510 containerd[1564]: time="2026-01-23T18:58:13.049532489Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:58:13.050510 containerd[1564]: time="2026-01-23T18:58:13.049607579Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:58:13.051066 containerd[1564]: time="2026-01-23T18:58:13.049700709Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:58:13.051234 containerd[1564]: time="2026-01-23T18:58:13.051192900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:58:13.051337 containerd[1564]: time="2026-01-23T18:58:13.051320780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:58:13.051420 containerd[1564]: time="2026-01-23T18:58:13.051405760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:58:13.051510 containerd[1564]: time="2026-01-23T18:58:13.051489920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:58:13.051663 containerd[1564]: time="2026-01-23T18:58:13.051643420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:58:13.051763 containerd[1564]: time="2026-01-23T18:58:13.051745810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:58:13.178196 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 18:58:13.158923 systemd-networkd[1438]: eth0: DHCPv4 address 172.238.168.154/24, gateway 172.238.168.1 acquired from 23.192.120.212 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.085845547Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 
18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086068077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086092877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086109947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086224197Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086248947Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086261267Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086462097Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086474337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086484437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086515557Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086550197Z" level=info msg="runtime interface created" Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086556837Z" level=info msg="created NRI interface" Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086566727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:58:13.178469 containerd[1564]: time="2026-01-23T18:58:13.086584767Z" level=info msg="Connect containerd service" Jan 23 18:58:13.160444 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:13.179016 containerd[1564]: time="2026-01-23T18:58:13.086617517Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:58:13.179016 containerd[1564]: time="2026-01-23T18:58:13.089170389Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:58:13.164570 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:13.166188 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. 
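Two notes on the cri plugin output above. The config="{...}" dump is ordinary JSON in escaped form; json.dumps(json.loads(blob), indent=2) renders it readable. And the "no network config found in /etc/cni/net.d" error is expected this early in boot: the message itself says CNI loads lazily, and nothing has installed a network config yet (a cluster network addon normally does). Purely as a hypothetical illustration of the kind of file the loader looks for:

    import json, pathlib

    # Hypothetical loopback-only CNI config, for illustration; a real cluster's
    # network addon installs the actual config under /etc/cni/net.d.
    conf = {"cniVersion": "1.0.0", "name": "lo", "type": "loopback"}
    pathlib.Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/etc/cni/net.d/99-loopback.conf").write_text(json.dumps(conf))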
Jan 23 18:58:13.179521 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1438 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 18:58:13.187045 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 18:58:13.196197 extend-filesystems[1553]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 18:58:13.196197 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 18:58:13.196197 extend-filesystems[1553]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 18:58:13.194862 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:58:13.206344 extend-filesystems[1519]: Resized filesystem in /dev/sda9 Jan 23 18:58:13.195257 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:58:13.202558 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:58:13.208491 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:58:13.222119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:13.226630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:58:13.644065 coreos-metadata[1588]: Jan 23 18:58:13.643 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 18:58:13.784458 coreos-metadata[1588]: Jan 23 18:58:13.778 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 18:58:13.849923 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:58:13.880249 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 18:58:13.883669 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 18:58:13.924927 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1624 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 18:58:13.930852 coreos-metadata[1588]: Jan 23 18:58:13.930 INFO Fetch successful Jan 23 18:58:13.982165 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:58:14.058349 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 18:58:14.080366 systemd[1]: Started sshd@0-172.238.168.154:22-68.220.241.50:41736.service - OpenSSH per-connection server daemon (68.220.241.50:41736). Jan 23 18:58:14.227363 coreos-metadata[1515]: Jan 23 18:58:14.206 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Jan 23 18:58:14.235938 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:58:14.235178 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 18:58:14.244327 systemd[1]: Finished sshkeys.service. 
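The coreos-metadata agents above and below follow Linode's token-authenticated metadata flow: a PUT to /v1/token for a short-lived token, then GETs for /v1/ssh-keys, /v1/instance and /v1/network carrying it. A sketch of the same sequence; the two header names are an assumption from Linode's metadata-service documentation, not something the log shows:

    import urllib.request

    BASE = "http://169.254.169.254/v1"
    # Header names below are assumed, not taken from the log.
    tok = urllib.request.Request(BASE + "/token", method="PUT",
                                 headers={"Metadata-Token-Expiry-Seconds": "3600"})
    token = urllib.request.urlopen(tok, timeout=5).read().decode()
    inst = urllib.request.Request(BASE + "/instance",
                                  headers={"Metadata-Token": token})
    print(urllib.request.urlopen(inst, timeout=5).read().decode())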
Jan 23 18:58:14.268576 containerd[1564]: time="2026-01-23T18:58:14.268476158Z" level=info msg="Start subscribing containerd event" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.268660318Z" level=info msg="Start recovering state" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269151858Z" level=info msg="Start event monitor" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269169818Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269202038Z" level=info msg="Start streaming server" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269246068Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269259378Z" level=info msg="runtime interface starting up..." Jan 23 18:58:14.269276 containerd[1564]: time="2026-01-23T18:58:14.269266568Z" level=info msg="starting plugins..." Jan 23 18:58:14.269446 containerd[1564]: time="2026-01-23T18:58:14.269291598Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:58:14.273549 containerd[1564]: time="2026-01-23T18:58:14.273493690Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:58:14.273725 containerd[1564]: time="2026-01-23T18:58:14.273692950Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:58:14.279902 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:14.285110 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:58:14.290129 containerd[1564]: time="2026-01-23T18:58:14.289908059Z" level=info msg="containerd successfully booted in 1.359178s" Jan 23 18:58:14.328041 coreos-metadata[1515]: Jan 23 18:58:14.327 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 18:58:14.548087 polkitd[1653]: Started polkitd version 126 Jan 23 18:58:14.559733 polkitd[1653]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 18:58:14.560953 polkitd[1653]: Loading rules from directory /run/polkit-1/rules.d Jan 23 18:58:14.561767 polkitd[1653]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 18:58:14.562247 polkitd[1653]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 18:58:14.562327 polkitd[1653]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 18:58:14.562481 polkitd[1653]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 18:58:14.564372 polkitd[1653]: Finished loading, compiling and executing 2 rules Jan 23 18:58:14.564951 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 18:58:14.568183 coreos-metadata[1515]: Jan 23 18:58:14.567 INFO Fetch successful Jan 23 18:58:14.568183 coreos-metadata[1515]: Jan 23 18:58:14.567 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 18:58:14.568023 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 18:58:14.568745 polkitd[1653]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 18:58:14.660577 systemd-hostnamed[1624]: Hostname set to <172-238-168-154> (transient) Jan 23 18:58:14.660888 systemd-resolved[1439]: System hostname changed to '172-238-168-154'. 
Jan 23 18:58:14.663208 sshd[1654]: Accepted publickey for core from 68.220.241.50 port 41736 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:14.678489 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:14.726945 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:58:14.732288 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:58:14.829881 coreos-metadata[1515]: Jan 23 18:58:14.829 INFO Fetch successful Jan 23 18:58:14.866286 systemd-logind[1533]: New session 1 of user core. Jan 23 18:58:14.913031 tar[1545]: linux-amd64/README.md Jan 23 18:58:14.924595 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:58:14.932404 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:58:14.958034 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:58:14.967663 systemd-logind[1533]: New session c1 of user core. Jan 23 18:58:14.978335 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:58:15.101668 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:58:15.104581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:58:15.169632 systemd[1679]: Queued start job for default target default.target. Jan 23 18:58:15.186326 systemd[1679]: Created slice app.slice - User Application Slice. Jan 23 18:58:15.186953 systemd[1679]: Reached target paths.target - Paths. Jan 23 18:58:15.187034 systemd[1679]: Reached target timers.target - Timers. Jan 23 18:58:15.192110 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:58:15.264882 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:58:15.265208 systemd[1679]: Reached target sockets.target - Sockets. Jan 23 18:58:15.265543 systemd[1679]: Reached target basic.target - Basic System. Jan 23 18:58:15.265765 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:58:15.265895 systemd[1679]: Reached target default.target - Main User Target. Jan 23 18:58:15.265957 systemd[1679]: Startup finished in 282ms. Jan 23 18:58:15.282196 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:58:15.619677 systemd[1]: Started sshd@1-172.238.168.154:22-68.220.241.50:33080.service - OpenSSH per-connection server daemon (68.220.241.50:33080). Jan 23 18:58:15.735176 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:15.917113 sshd[1704]: Accepted publickey for core from 68.220.241.50 port 33080 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:15.916544 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:15.927093 systemd-logind[1533]: New session 2 of user core. Jan 23 18:58:15.933128 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:58:16.101015 sshd[1707]: Connection closed by 68.220.241.50 port 33080 Jan 23 18:58:16.101646 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:16.107568 systemd[1]: sshd@1-172.238.168.154:22-68.220.241.50:33080.service: Deactivated successfully. Jan 23 18:58:16.108669 systemd-logind[1533]: Session 2 logged out. Waiting for processes to exit. 
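The SHA256:abrAq+... value sshd logs with each accepted key above is a fingerprint of the key blob itself: base64-decode the second field of the authorized_keys entry, hash it with SHA-256, and base64-encode the digest with the padding stripped. A sketch of that derivation:

    import base64, hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        # Field 1 of "ssh-rsa AAAA... comment" is the raw key blob in base64.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = base64.b64encode(hashlib.sha256(blob).digest()).decode()
        return "SHA256:" + digest.rstrip("=")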
Jan 23 18:58:16.111016 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:58:16.115206 systemd-logind[1533]: Removed session 2. Jan 23 18:58:16.136889 systemd[1]: Started sshd@2-172.238.168.154:22-68.220.241.50:33096.service - OpenSSH per-connection server daemon (68.220.241.50:33096). Jan 23 18:58:16.335306 sshd[1713]: Accepted publickey for core from 68.220.241.50 port 33096 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:16.424834 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:16.433042 systemd-logind[1533]: New session 3 of user core. Jan 23 18:58:16.438133 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:58:16.618400 sshd[1716]: Connection closed by 68.220.241.50 port 33096 Jan 23 18:58:16.621385 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:16.627572 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:58:16.629417 systemd[1]: sshd@2-172.238.168.154:22-68.220.241.50:33096.service: Deactivated successfully. Jan 23 18:58:16.632302 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:58:16.635336 systemd-logind[1533]: Removed session 3. Jan 23 18:58:17.569231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:17.573427 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:58:17.575202 systemd[1]: Startup finished in 6.279s (kernel) + 10.836s (initrd) + 12.084s (userspace) = 29.200s. Jan 23 18:58:17.579446 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:18.919381 kubelet[1726]: E0123 18:58:18.919141 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:18.924203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:18.924482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:18.925703 systemd[1]: kubelet.service: Consumed 3.784s CPU time, 256.9M memory peak. Jan 23 18:58:26.660553 systemd[1]: Started sshd@3-172.238.168.154:22-68.220.241.50:49810.service - OpenSSH per-connection server daemon (68.220.241.50:49810). Jan 23 18:58:26.871027 sshd[1738]: Accepted publickey for core from 68.220.241.50 port 49810 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:26.872810 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:26.882497 systemd-logind[1533]: New session 4 of user core. Jan 23 18:58:26.895145 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:58:27.027272 sshd[1741]: Connection closed by 68.220.241.50 port 49810 Jan 23 18:58:27.029264 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:27.033679 systemd[1]: sshd@3-172.238.168.154:22-68.220.241.50:49810.service: Deactivated successfully. Jan 23 18:58:27.036500 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:58:27.038544 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:58:27.040935 systemd-logind[1533]: Removed session 4. 
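The kubelet failure above, and the restart loop it settles into below (systemd's restart counter climbing with each attempt), is expected on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is only written during node setup (kubeadm init/join, in a kubeadm-style flow). For illustration only, a hypothetical minimal KubeletConfiguration matching the systemd cgroup driver this node's runtime advertises:

    import textwrap

    # Hypothetical minimal /var/lib/kubelet/config.yaml; normally written by
    # node bootstrap, not by hand like this.
    minimal_config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
    """)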
Jan 23 18:58:27.060129 systemd[1]: Started sshd@4-172.238.168.154:22-68.220.241.50:49824.service - OpenSSH per-connection server daemon (68.220.241.50:49824). Jan 23 18:58:27.236515 sshd[1747]: Accepted publickey for core from 68.220.241.50 port 49824 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:27.238387 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:27.245796 systemd-logind[1533]: New session 5 of user core. Jan 23 18:58:27.255157 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:58:27.371250 sshd[1750]: Connection closed by 68.220.241.50 port 49824 Jan 23 18:58:27.372529 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:27.376960 systemd[1]: sshd@4-172.238.168.154:22-68.220.241.50:49824.service: Deactivated successfully. Jan 23 18:58:27.379646 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:58:27.380914 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:58:27.382837 systemd-logind[1533]: Removed session 5. Jan 23 18:58:27.401174 systemd[1]: Started sshd@5-172.238.168.154:22-68.220.241.50:49832.service - OpenSSH per-connection server daemon (68.220.241.50:49832). Jan 23 18:58:27.579114 sshd[1756]: Accepted publickey for core from 68.220.241.50 port 49832 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:27.581169 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:27.588738 systemd-logind[1533]: New session 6 of user core. Jan 23 18:58:27.596136 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:58:27.713403 sshd[1759]: Connection closed by 68.220.241.50 port 49832 Jan 23 18:58:27.715186 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:27.719947 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:58:27.720523 systemd[1]: sshd@5-172.238.168.154:22-68.220.241.50:49832.service: Deactivated successfully. Jan 23 18:58:27.723413 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:58:27.725647 systemd-logind[1533]: Removed session 6. Jan 23 18:58:27.744313 systemd[1]: Started sshd@6-172.238.168.154:22-68.220.241.50:49836.service - OpenSSH per-connection server daemon (68.220.241.50:49836). Jan 23 18:58:27.912048 sshd[1765]: Accepted publickey for core from 68.220.241.50 port 49836 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:27.913364 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:27.919190 systemd-logind[1533]: New session 7 of user core. Jan 23 18:58:27.925117 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:58:28.029793 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:58:28.030196 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:58:28.047857 sudo[1769]: pam_unix(sudo:session): session closed for user root Jan 23 18:58:28.069306 sshd[1768]: Connection closed by 68.220.241.50 port 49836 Jan 23 18:58:28.071213 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:28.075762 systemd[1]: sshd@6-172.238.168.154:22-68.220.241.50:49836.service: Deactivated successfully. Jan 23 18:58:28.078630 systemd[1]: session-7.scope: Deactivated successfully. 
Jan 23 18:58:28.081592 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:58:28.082695 systemd-logind[1533]: Removed session 7. Jan 23 18:58:28.100329 systemd[1]: Started sshd@7-172.238.168.154:22-68.220.241.50:49838.service - OpenSSH per-connection server daemon (68.220.241.50:49838). Jan 23 18:58:28.266333 sshd[1775]: Accepted publickey for core from 68.220.241.50 port 49838 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:28.268347 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:28.274794 systemd-logind[1533]: New session 8 of user core. Jan 23 18:58:28.279100 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:58:28.376768 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:58:28.377167 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:58:28.383078 sudo[1780]: pam_unix(sudo:session): session closed for user root Jan 23 18:58:28.390323 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:58:28.390707 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:58:28.402833 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:58:28.466333 augenrules[1802]: No rules Jan 23 18:58:28.469015 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:58:28.469531 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:58:28.471285 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 23 18:58:28.493323 sshd[1778]: Connection closed by 68.220.241.50 port 49838 Jan 23 18:58:28.495266 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 23 18:58:28.499559 systemd[1]: sshd@7-172.238.168.154:22-68.220.241.50:49838.service: Deactivated successfully. Jan 23 18:58:28.501975 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:58:28.504187 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:58:28.505866 systemd-logind[1533]: Removed session 8. Jan 23 18:58:28.531058 systemd[1]: Started sshd@8-172.238.168.154:22-68.220.241.50:49852.service - OpenSSH per-connection server daemon (68.220.241.50:49852). Jan 23 18:58:28.729682 sshd[1811]: Accepted publickey for core from 68.220.241.50 port 49852 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:58:28.731724 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:58:28.739040 systemd-logind[1533]: New session 9 of user core. Jan 23 18:58:28.749112 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:58:28.852463 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:58:28.853024 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:58:29.175541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:58:29.182302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:29.815553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 18:58:29.822178 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:30.150275 kubelet[1834]: E0123 18:58:30.149789 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:30.174524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:30.174801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:30.175347 systemd[1]: kubelet.service: Consumed 802ms CPU time, 110.2M memory peak. Jan 23 18:58:30.947788 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:58:30.998083 (dockerd)[1847]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:58:32.336123 dockerd[1847]: time="2026-01-23T18:58:32.335852785Z" level=info msg="Starting up" Jan 23 18:58:32.338663 dockerd[1847]: time="2026-01-23T18:58:32.338636407Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:58:32.384284 dockerd[1847]: time="2026-01-23T18:58:32.384198759Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:58:32.417268 systemd[1]: var-lib-docker-metacopy\x2dcheck2278414404-merged.mount: Deactivated successfully. Jan 23 18:58:32.440837 dockerd[1847]: time="2026-01-23T18:58:32.440758678Z" level=info msg="Loading containers: start." Jan 23 18:58:32.460048 kernel: Initializing XFRM netlink socket Jan 23 18:58:32.892295 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Jan 23 18:58:32.960803 systemd-networkd[1438]: docker0: Link UP Jan 23 18:58:32.966648 dockerd[1847]: time="2026-01-23T18:58:32.966578860Z" level=info msg="Loading containers: done." Jan 23 18:58:33.030698 dockerd[1847]: time="2026-01-23T18:58:33.030621102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:58:33.030939 dockerd[1847]: time="2026-01-23T18:58:33.030775663Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:58:33.030939 dockerd[1847]: time="2026-01-23T18:58:33.030909403Z" level=info msg="Initializing buildkit" Jan 23 18:58:33.031862 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck512473339-merged.mount: Deactivated successfully. Jan 23 18:58:33.065917 dockerd[1847]: time="2026-01-23T18:58:33.065842040Z" level=info msg="Completed buildkit initialization" Jan 23 18:58:33.074063 dockerd[1847]: time="2026-01-23T18:58:33.073708464Z" level=info msg="Daemon has completed initialization" Jan 23 18:58:33.074392 dockerd[1847]: time="2026-01-23T18:58:33.074254234Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:58:33.074771 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:58:34.441532 systemd-resolved[1439]: Clock change detected. Flushing caches. 
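"API listen on /run/docker.sock" above means the daemon speaks plain HTTP over a Unix socket rather than TCP. The Python stdlib can talk to it with a small HTTPConnection override; /version is a stable endpoint to probe:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Plain HTTP over an AF_UNIX socket, e.g. Docker's /run/docker.sock.
        def __init__(self, sock_path):
            super().__init__("localhost")  # host header only; unused for routing
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())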
Jan 23 18:58:34.442112 systemd-timesyncd[1460]: Contacted time server [2600:3c06::f03c:94ff:fee2:c53a]:123 (2.flatcar.pool.ntp.org). Jan 23 18:58:34.442195 systemd-timesyncd[1460]: Initial clock synchronization to Fri 2026-01-23 18:58:34.440256 UTC. Jan 23 18:58:36.019298 containerd[1564]: time="2026-01-23T18:58:36.017625885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 18:58:36.826010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165914787.mount: Deactivated successfully. Jan 23 18:58:38.849970 containerd[1564]: time="2026-01-23T18:58:38.849208730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.852354 containerd[1564]: time="2026-01-23T18:58:38.850845641Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068079" Jan 23 18:58:38.852354 containerd[1564]: time="2026-01-23T18:58:38.851977602Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.856247 containerd[1564]: time="2026-01-23T18:58:38.856154144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:38.857787 containerd[1564]: time="2026-01-23T18:58:38.857042054Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.838739818s" Jan 23 18:58:38.857787 containerd[1564]: time="2026-01-23T18:58:38.857164194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 23 18:58:38.860513 containerd[1564]: time="2026-01-23T18:58:38.860484756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 18:58:40.914549 containerd[1564]: time="2026-01-23T18:58:40.914428982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:40.916432 containerd[1564]: time="2026-01-23T18:58:40.916398223Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162446" Jan 23 18:58:40.917035 containerd[1564]: time="2026-01-23T18:58:40.916976283Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:40.921001 containerd[1564]: time="2026-01-23T18:58:40.920603275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:40.922811 containerd[1564]: time="2026-01-23T18:58:40.922763176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id 
\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.06224003s" Jan 23 18:58:40.922870 containerd[1564]: time="2026-01-23T18:58:40.922814176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 23 18:58:40.926130 containerd[1564]: time="2026-01-23T18:58:40.926097348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 18:58:41.649659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:58:41.657338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:42.440493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:42.476582 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:42.976009 kubelet[2132]: E0123 18:58:42.974664 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:42.984672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:42.985158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:42.987278 systemd[1]: kubelet.service: Consumed 1.107s CPU time, 110.3M memory peak. 
Jan 23 18:58:43.155141 containerd[1564]: time="2026-01-23T18:58:43.154991092Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725933" Jan 23 18:58:43.156421 containerd[1564]: time="2026-01-23T18:58:43.156388762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:43.163109 containerd[1564]: time="2026-01-23T18:58:43.162464955Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:43.163429 containerd[1564]: time="2026-01-23T18:58:43.163395326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:43.164521 containerd[1564]: time="2026-01-23T18:58:43.164495666Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 2.238335368s" Jan 23 18:58:43.164684 containerd[1564]: time="2026-01-23T18:58:43.164662206Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 23 18:58:43.166530 containerd[1564]: time="2026-01-23T18:58:43.166472837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 18:58:44.897568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911787085.mount: Deactivated successfully. 
Jan 23 18:58:46.023398 containerd[1564]: time="2026-01-23T18:58:46.023238235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:46.027670 containerd[1564]: time="2026-01-23T18:58:46.026619196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965299" Jan 23 18:58:46.029039 containerd[1564]: time="2026-01-23T18:58:46.028805268Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:46.032782 containerd[1564]: time="2026-01-23T18:58:46.032725570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:46.034496 containerd[1564]: time="2026-01-23T18:58:46.033850830Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.867309523s" Jan 23 18:58:46.034932 containerd[1564]: time="2026-01-23T18:58:46.034770111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 18:58:46.034824 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 18:58:46.043652 containerd[1564]: time="2026-01-23T18:58:46.043103035Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 18:58:46.587586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount623134819.mount: Deactivated successfully. 
Jan 23 18:58:49.086585 containerd[1564]: time="2026-01-23T18:58:49.086140565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.090501 containerd[1564]: time="2026-01-23T18:58:49.088240296Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Jan 23 18:58:49.092761 containerd[1564]: time="2026-01-23T18:58:49.092664608Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.094954 containerd[1564]: time="2026-01-23T18:58:49.094816059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.096303 containerd[1564]: time="2026-01-23T18:58:49.096069620Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.052912645s" Jan 23 18:58:49.096960 containerd[1564]: time="2026-01-23T18:58:49.096378140Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 18:58:49.100765 containerd[1564]: time="2026-01-23T18:58:49.100710382Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 18:58:49.639837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740168159.mount: Deactivated successfully. 
Jan 23 18:58:49.646464 containerd[1564]: time="2026-01-23T18:58:49.646386345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.649359 containerd[1564]: time="2026-01-23T18:58:49.649044316Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Jan 23 18:58:49.650053 containerd[1564]: time="2026-01-23T18:58:49.650011377Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.652516 containerd[1564]: time="2026-01-23T18:58:49.652467488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:49.653666 containerd[1564]: time="2026-01-23T18:58:49.653595529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 552.832057ms" Jan 23 18:58:49.653666 containerd[1564]: time="2026-01-23T18:58:49.653639909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 18:58:49.654959 containerd[1564]: time="2026-01-23T18:58:49.654669759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 18:58:50.170075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332871326.mount: Deactivated successfully. Jan 23 18:58:53.161951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:58:53.171008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:58:53.799254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:58:53.826362 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:58:54.028925 kubelet[2267]: E0123 18:58:54.027850 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:58:54.032643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:58:54.033414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:58:54.034556 systemd[1]: kubelet.service: Consumed 701ms CPU time, 108.7M memory peak. 
Jan 23 18:58:56.154953 containerd[1564]: time="2026-01-23T18:58:56.153632706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:56.156804 containerd[1564]: time="2026-01-23T18:58:56.155844808Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166820" Jan 23 18:58:56.159238 containerd[1564]: time="2026-01-23T18:58:56.159154669Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:56.164408 containerd[1564]: time="2026-01-23T18:58:56.164333662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:56.166438 containerd[1564]: time="2026-01-23T18:58:56.166332533Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.511618134s" Jan 23 18:58:56.166606 containerd[1564]: time="2026-01-23T18:58:56.166576193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 18:58:57.740033 update_engine[1536]: I20260123 18:58:57.738452 1536 update_attempter.cc:509] Updating boot flags... Jan 23 18:59:00.025514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:00.026390 systemd[1]: kubelet.service: Consumed 701ms CPU time, 108.7M memory peak. Jan 23 18:59:00.033092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:00.076580 systemd[1]: Reload requested from client PID 2325 ('systemctl') (unit session-9.scope)... Jan 23 18:59:00.076639 systemd[1]: Reloading... Jan 23 18:59:00.287945 zram_generator::config[2371]: No configuration found. Jan 23 18:59:00.620017 systemd[1]: Reloading finished in 542 ms. Jan 23 18:59:00.689677 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:59:00.690161 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:59:00.690686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:00.690819 systemd[1]: kubelet.service: Consumed 546ms CPU time, 98.2M memory peak. Jan 23 18:59:00.693345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:00.902510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:00.913548 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:59:01.024759 kubelet[2422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:59:01.024759 kubelet[2422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
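The etcd pull above logs both the byte count and the elapsed time, so the effective registry throughput falls straight out of the arithmetic: about 74 MB in 6.5 s, roughly 11 MB/s. A small sketch using the numbers copied from the log entry:

```go
// The constants below are copied from the etcd pull entry above; only the
// division is added here.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 74166820                   // "bytes read=74166820"
	elapsed := 6511618134 * time.Nanosecond // "in 6.511618134s"
	mbps := float64(bytesRead) / 1e6 / elapsed.Seconds()
	fmt.Printf("~%.1f MB/s effective pull throughput\n", mbps) // ~11.4 MB/s
}
```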
Jan 23 18:59:01.025719 kubelet[2422]: I0123 18:59:01.024945 2422 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:59:01.493557 kubelet[2422]: I0123 18:59:01.493490 2422 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:59:01.493557 kubelet[2422]: I0123 18:59:01.493535 2422 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:59:01.493744 kubelet[2422]: I0123 18:59:01.493621 2422 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:59:01.493744 kubelet[2422]: I0123 18:59:01.493643 2422 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:59:01.493994 kubelet[2422]: I0123 18:59:01.493969 2422 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:59:01.501765 kubelet[2422]: I0123 18:59:01.501745 2422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:59:01.502090 kubelet[2422]: E0123 18:59:01.502040 2422 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.168.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:59:01.517518 kubelet[2422]: I0123 18:59:01.517497 2422 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:59:01.530931 kubelet[2422]: I0123 18:59:01.530277 2422 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 18:59:01.530931 kubelet[2422]: I0123 18:59:01.530635 2422 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:59:01.530931 kubelet[2422]: I0123 18:59:01.530658 2422 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-168-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:59:01.530931 kubelet[2422]: I0123 18:59:01.530888 2422 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:59:01.531625 kubelet[2422]: I0123 18:59:01.531610 2422 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:59:01.531819 kubelet[2422]: I0123 18:59:01.531804 2422 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:59:01.534180 kubelet[2422]: I0123 18:59:01.534161 2422 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:01.534693 kubelet[2422]: I0123 18:59:01.534675 2422 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:59:01.534800 kubelet[2422]: I0123 18:59:01.534779 2422 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:59:01.534996 kubelet[2422]: I0123 18:59:01.534980 2422 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:59:01.535135 kubelet[2422]: I0123 18:59:01.535113 2422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:59:01.539272 kubelet[2422]: E0123 18:59:01.539244 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.168.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-168-154&limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:59:01.539646 kubelet[2422]: E0123 18:59:01.539625 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.238.168.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:59:01.541955 kubelet[2422]: I0123 18:59:01.540362 2422 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:59:01.541955 kubelet[2422]: I0123 18:59:01.540917 2422 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:59:01.541955 kubelet[2422]: I0123 18:59:01.540943 2422 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:59:01.541955 kubelet[2422]: W0123 18:59:01.541040 2422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 18:59:01.568709 kubelet[2422]: I0123 18:59:01.568674 2422 server.go:1262] "Started kubelet" Jan 23 18:59:01.571129 kubelet[2422]: I0123 18:59:01.571101 2422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:59:01.577428 kubelet[2422]: E0123 18:59:01.574285 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.168.154:6443/api/v1/namespaces/default/events\": dial tcp 172.238.168.154:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-168-154.188d713e6daf9950 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-168-154,UID:172-238-168-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-168-154,},FirstTimestamp:2026-01-23 18:59:01.568608592 +0000 UTC m=+0.635526918,LastTimestamp:2026-01-23 18:59:01.568608592 +0000 UTC m=+0.635526918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-168-154,}" Jan 23 18:59:01.579064 kubelet[2422]: I0123 18:59:01.579022 2422 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:59:01.581591 kubelet[2422]: I0123 18:59:01.581560 2422 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:59:01.585106 kubelet[2422]: I0123 18:59:01.585071 2422 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:59:01.585771 kubelet[2422]: E0123 18:59:01.585739 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-238-168-154\" not found" Jan 23 18:59:01.586272 kubelet[2422]: I0123 18:59:01.586238 2422 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:59:01.586449 kubelet[2422]: I0123 18:59:01.586346 2422 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:59:01.590775 kubelet[2422]: I0123 18:59:01.589571 2422 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:59:01.590775 kubelet[2422]: I0123 18:59:01.589670 2422 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:59:01.590775 kubelet[2422]: I0123 18:59:01.589979 2422 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:59:01.590775 
kubelet[2422]: I0123 18:59:01.590625 2422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:59:01.597492 kubelet[2422]: E0123 18:59:01.597462 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.168.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:59:01.598642 kubelet[2422]: E0123 18:59:01.598604 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-154?timeout=10s\": dial tcp 172.238.168.154:6443: connect: connection refused" interval="200ms" Jan 23 18:59:01.599788 kubelet[2422]: E0123 18:59:01.599760 2422 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:59:01.600246 kubelet[2422]: I0123 18:59:01.600225 2422 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:59:01.600344 kubelet[2422]: I0123 18:59:01.600330 2422 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:59:01.600587 kubelet[2422]: I0123 18:59:01.600562 2422 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:59:01.653289 kubelet[2422]: I0123 18:59:01.653187 2422 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:59:01.655544 kubelet[2422]: I0123 18:59:01.655517 2422 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:59:01.655808 kubelet[2422]: I0123 18:59:01.655774 2422 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:59:01.656101 kubelet[2422]: I0123 18:59:01.656074 2422 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:59:01.656417 kubelet[2422]: E0123 18:59:01.656368 2422 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:59:01.661381 kubelet[2422]: E0123 18:59:01.660942 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.168.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:59:01.674487 kubelet[2422]: I0123 18:59:01.674451 2422 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:59:01.674924 kubelet[2422]: I0123 18:59:01.674849 2422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:59:01.675053 kubelet[2422]: I0123 18:59:01.675034 2422 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:01.677011 kubelet[2422]: I0123 18:59:01.676985 2422 policy_none.go:49] "None policy: Start" Jan 23 18:59:01.677112 kubelet[2422]: I0123 18:59:01.677025 2422 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:59:01.677112 kubelet[2422]: I0123 18:59:01.677055 2422 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:59:01.678930 kubelet[2422]: I0123 18:59:01.677924 2422 policy_none.go:47] "Start" Jan 23 18:59:01.687145 kubelet[2422]: E0123 18:59:01.686284 2422 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-238-168-154\" not found" Jan 23 18:59:01.688918 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:59:01.714739 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:59:01.719846 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:59:01.728643 kubelet[2422]: E0123 18:59:01.728442 2422 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:59:01.729990 kubelet[2422]: I0123 18:59:01.729344 2422 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:59:01.729990 kubelet[2422]: I0123 18:59:01.729366 2422 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:59:01.729990 kubelet[2422]: I0123 18:59:01.729858 2422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:59:01.732734 kubelet[2422]: E0123 18:59:01.732691 2422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:59:01.733162 kubelet[2422]: E0123 18:59:01.733142 2422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-168-154\" not found" Jan 23 18:59:01.773629 systemd[1]: Created slice kubepods-burstable-pod380e07b09cd90c4f24a9770538155964.slice - libcontainer container kubepods-burstable-pod380e07b09cd90c4f24a9770538155964.slice. 
Jan 23 18:59:01.787788 kubelet[2422]: I0123 18:59:01.787744 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-flexvolume-dir\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:01.787788 kubelet[2422]: I0123 18:59:01.787786 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-k8s-certs\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:01.787788 kubelet[2422]: I0123 18:59:01.787807 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-kubeconfig\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:01.788220 kubelet[2422]: I0123 18:59:01.787824 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:01.788220 kubelet[2422]: I0123 18:59:01.787843 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ce838029d201cc1e28052cb6ec5b7fd-kubeconfig\") pod \"kube-scheduler-172-238-168-154\" (UID: \"7ce838029d201cc1e28052cb6ec5b7fd\") " pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:01.788220 kubelet[2422]: I0123 18:59:01.787859 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-ca-certs\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:01.788220 kubelet[2422]: I0123 18:59:01.787873 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:01.788220 kubelet[2422]: I0123 18:59:01.787889 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-ca-certs\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:01.788382 kubelet[2422]: I0123 18:59:01.787930 2422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-k8s-certs\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:01.795720 kubelet[2422]: E0123 18:59:01.795686 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:01.800507 kubelet[2422]: E0123 18:59:01.800470 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-154?timeout=10s\": dial tcp 172.238.168.154:6443: connect: connection refused" interval="400ms" Jan 23 18:59:01.802240 systemd[1]: Created slice kubepods-burstable-pod211f039970eac1897076b0fc0d5b82c4.slice - libcontainer container kubepods-burstable-pod211f039970eac1897076b0fc0d5b82c4.slice. Jan 23 18:59:01.804754 kubelet[2422]: E0123 18:59:01.804732 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:01.807022 systemd[1]: Created slice kubepods-burstable-pod7ce838029d201cc1e28052cb6ec5b7fd.slice - libcontainer container kubepods-burstable-pod7ce838029d201cc1e28052cb6ec5b7fd.slice. Jan 23 18:59:01.809386 kubelet[2422]: E0123 18:59:01.809364 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:01.832315 kubelet[2422]: I0123 18:59:01.831825 2422 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:01.832315 kubelet[2422]: E0123 18:59:01.832255 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.154:6443/api/v1/nodes\": dial tcp 172.238.168.154:6443: connect: connection refused" node="172-238-168-154" Jan 23 18:59:02.035262 kubelet[2422]: I0123 18:59:02.035124 2422 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:02.035869 kubelet[2422]: E0123 18:59:02.035449 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.154:6443/api/v1/nodes\": dial tcp 172.238.168.154:6443: connect: connection refused" node="172-238-168-154" Jan 23 18:59:02.098521 kubelet[2422]: E0123 18:59:02.098473 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:02.100133 containerd[1564]: time="2026-01-23T18:59:02.099838438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-168-154,Uid:380e07b09cd90c4f24a9770538155964,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:02.107768 kubelet[2422]: E0123 18:59:02.107730 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:02.110450 containerd[1564]: time="2026-01-23T18:59:02.109858373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-168-154,Uid:211f039970eac1897076b0fc0d5b82c4,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:02.111533 kubelet[2422]: E0123 18:59:02.111332 2422 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:02.112913 containerd[1564]: time="2026-01-23T18:59:02.112866154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-168-154,Uid:7ce838029d201cc1e28052cb6ec5b7fd,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:02.204742 kubelet[2422]: E0123 18:59:02.204697 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-154?timeout=10s\": dial tcp 172.238.168.154:6443: connect: connection refused" interval="800ms" Jan 23 18:59:02.382842 kubelet[2422]: E0123 18:59:02.382655 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.168.154:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-168-154&limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:59:02.438435 kubelet[2422]: I0123 18:59:02.438371 2422 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:02.438975 kubelet[2422]: E0123 18:59:02.438888 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.154:6443/api/v1/nodes\": dial tcp 172.238.168.154:6443: connect: connection refused" node="172-238-168-154" Jan 23 18:59:02.493201 kubelet[2422]: E0123 18:59:02.493106 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.168.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:59:02.632541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094600856.mount: Deactivated successfully. 
Jan 23 18:59:02.636531 containerd[1564]: time="2026-01-23T18:59:02.636410156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:59:02.638136 containerd[1564]: time="2026-01-23T18:59:02.638074416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:59:02.638830 containerd[1564]: time="2026-01-23T18:59:02.638794687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:59:02.639843 containerd[1564]: time="2026-01-23T18:59:02.639791597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 18:59:02.641921 containerd[1564]: time="2026-01-23T18:59:02.640734198Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:59:02.641921 containerd[1564]: time="2026-01-23T18:59:02.641579108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:59:02.642092 containerd[1564]: time="2026-01-23T18:59:02.642061508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 18:59:02.643739 containerd[1564]: time="2026-01-23T18:59:02.643690619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:59:02.644665 containerd[1564]: time="2026-01-23T18:59:02.644608930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 530.177385ms" Jan 23 18:59:02.646937 containerd[1564]: time="2026-01-23T18:59:02.646880801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 542.162551ms" Jan 23 18:59:02.650218 containerd[1564]: time="2026-01-23T18:59:02.650179352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 537.911338ms" Jan 23 18:59:02.670765 kubelet[2422]: E0123 18:59:02.670713 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.168.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:59:02.749320 containerd[1564]: time="2026-01-23T18:59:02.749241122Z" level=info msg="connecting to shim 5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76" address="unix:///run/containerd/s/065a8dcc6e335a84e93488b5dd411add383feff5a2a34ec29d43e1221a2a332d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:02.764162 containerd[1564]: time="2026-01-23T18:59:02.764077729Z" level=info msg="connecting to shim 63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6" address="unix:///run/containerd/s/ad5117b4a8c5e9f9f35cba77481a54bd37ab6eabb457b4c5970b0a1d799084c7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:02.768086 containerd[1564]: time="2026-01-23T18:59:02.767723651Z" level=info msg="connecting to shim 01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06" address="unix:///run/containerd/s/63d7fec8a1ee02b50011a7d98c9bdf80986f0d9d0ce40baf22eda5d1f2dcbdc9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:02.853408 systemd[1]: Started cri-containerd-5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76.scope - libcontainer container 5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76. Jan 23 18:59:02.892170 systemd[1]: Started cri-containerd-63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6.scope - libcontainer container 63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6. Jan 23 18:59:03.043969 kubelet[2422]: E0123 18:59:03.036690 2422 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-154?timeout=10s\": dial tcp 172.238.168.154:6443: connect: connection refused" interval="1.6s" Jan 23 18:59:03.170514 containerd[1564]: time="2026-01-23T18:59:03.170373502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-168-154,Uid:7ce838029d201cc1e28052cb6ec5b7fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76\"" Jan 23 18:59:03.173089 systemd[1]: Started cri-containerd-01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06.scope - libcontainer container 01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06. 
Jan 23 18:59:03.173665 kubelet[2422]: E0123 18:59:03.173256 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:03.179388 containerd[1564]: time="2026-01-23T18:59:03.179280307Z" level=info msg="CreateContainer within sandbox \"5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:59:03.206628 containerd[1564]: time="2026-01-23T18:59:03.206570160Z" level=info msg="Container ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:03.217763 containerd[1564]: time="2026-01-23T18:59:03.217644856Z" level=info msg="CreateContainer within sandbox \"5f2d0ed0fc698ef1bbe79ddb1d8122d8b2532176047b5aa17f69d932b3cfad76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24\"" Jan 23 18:59:03.218974 containerd[1564]: time="2026-01-23T18:59:03.218703537Z" level=info msg="StartContainer for \"ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24\"" Jan 23 18:59:03.220595 containerd[1564]: time="2026-01-23T18:59:03.220548247Z" level=info msg="connecting to shim ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24" address="unix:///run/containerd/s/065a8dcc6e335a84e93488b5dd411add383feff5a2a34ec29d43e1221a2a332d" protocol=ttrpc version=3 Jan 23 18:59:03.243569 kubelet[2422]: I0123 18:59:03.243533 2422 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:03.244390 kubelet[2422]: E0123 18:59:03.244230 2422 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.154:6443/api/v1/nodes\": dial tcp 172.238.168.154:6443: connect: connection refused" node="172-238-168-154" Jan 23 18:59:03.253236 kubelet[2422]: E0123 18:59:03.253152 2422 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.168.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:59:03.256757 containerd[1564]: time="2026-01-23T18:59:03.256724926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-168-154,Uid:380e07b09cd90c4f24a9770538155964,Namespace:kube-system,Attempt:0,} returns sandbox id \"63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6\"" Jan 23 18:59:03.263663 kubelet[2422]: E0123 18:59:03.263593 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:03.266786 containerd[1564]: time="2026-01-23T18:59:03.266753151Z" level=info msg="CreateContainer within sandbox \"63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:59:03.276778 containerd[1564]: time="2026-01-23T18:59:03.276589915Z" level=info msg="Container a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:03.281184 systemd[1]: Started 
cri-containerd-ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24.scope - libcontainer container ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24. Jan 23 18:59:03.282935 containerd[1564]: time="2026-01-23T18:59:03.282880779Z" level=info msg="CreateContainer within sandbox \"63c583ddd763982fa39dbb850325d772f766e4ab494c62d6471d0c58fa89edf6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a\"" Jan 23 18:59:03.284894 containerd[1564]: time="2026-01-23T18:59:03.284865930Z" level=info msg="StartContainer for \"a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a\"" Jan 23 18:59:03.287987 containerd[1564]: time="2026-01-23T18:59:03.287845041Z" level=info msg="connecting to shim a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a" address="unix:///run/containerd/s/ad5117b4a8c5e9f9f35cba77481a54bd37ab6eabb457b4c5970b0a1d799084c7" protocol=ttrpc version=3 Jan 23 18:59:03.355616 containerd[1564]: time="2026-01-23T18:59:03.355508715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-168-154,Uid:211f039970eac1897076b0fc0d5b82c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06\"" Jan 23 18:59:03.363563 kubelet[2422]: E0123 18:59:03.363408 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:03.368227 containerd[1564]: time="2026-01-23T18:59:03.368186501Z" level=info msg="CreateContainer within sandbox \"01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:59:03.373092 systemd[1]: Started cri-containerd-a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a.scope - libcontainer container a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a. Jan 23 18:59:03.376541 containerd[1564]: time="2026-01-23T18:59:03.376167515Z" level=info msg="Container 55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:03.385838 containerd[1564]: time="2026-01-23T18:59:03.384823980Z" level=info msg="CreateContainer within sandbox \"01073f84db9b005e2f6bdd851c483ec63aef27f17999d57304f2c2cbd79cac06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2\"" Jan 23 18:59:03.386854 containerd[1564]: time="2026-01-23T18:59:03.386783511Z" level=info msg="StartContainer for \"55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2\"" Jan 23 18:59:03.389234 containerd[1564]: time="2026-01-23T18:59:03.388973992Z" level=info msg="connecting to shim 55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2" address="unix:///run/containerd/s/63d7fec8a1ee02b50011a7d98c9bdf80986f0d9d0ce40baf22eda5d1f2dcbdc9" protocol=ttrpc version=3 Jan 23 18:59:03.449789 systemd[1]: Started cri-containerd-55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2.scope - libcontainer container 55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2. 
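The recurring "Nameserver limits exceeded" errors reflect the conventional three-nameserver cap (glibc's MAXNS, which the kubelet also applies): the host's resolv.conf lists more servers than the three that survive in the applied line (172.232.0.22, 172.232.0.9, 172.232.0.19). A sketch of that check, not the kubelet's own code:

```go
// Count nameserver entries in resolv.conf and report the ones a three-server
// cap would drop, mirroring the dns.go errors above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Println("nameservers within limit:", servers)
}
```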
Jan 23 18:59:03.513194 containerd[1564]: time="2026-01-23T18:59:03.513131364Z" level=info msg="StartContainer for \"a4ed52ecbed29b06863d0aa64447ad412fbb2f80ee3aa700853856769f5b691a\" returns successfully" Jan 23 18:59:03.544759 containerd[1564]: time="2026-01-23T18:59:03.544709929Z" level=info msg="StartContainer for \"ffeb1bd0ac603552b21fec2bf88698b8b29d15e5d93d3790f205d3e03f5d8d24\" returns successfully" Jan 23 18:59:03.561283 kubelet[2422]: E0123 18:59:03.561070 2422 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.168.154:6443/api/v1/namespaces/default/events\": dial tcp 172.238.168.154:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-168-154.188d713e6daf9950 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-168-154,UID:172-238-168-154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-168-154,},FirstTimestamp:2026-01-23 18:59:01.568608592 +0000 UTC m=+0.635526918,LastTimestamp:2026-01-23 18:59:01.568608592 +0000 UTC m=+0.635526918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-168-154,}" Jan 23 18:59:03.591830 kubelet[2422]: E0123 18:59:03.591745 2422 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.168.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.168.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:59:03.639316 containerd[1564]: time="2026-01-23T18:59:03.639279837Z" level=info msg="StartContainer for \"55802eaceb1e9c9cba0a7d13b54acdfec88abbba557f354fac37002e3589cec2\" returns successfully" Jan 23 18:59:03.698028 kubelet[2422]: E0123 18:59:03.697988 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:03.699427 kubelet[2422]: E0123 18:59:03.699396 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:03.700241 kubelet[2422]: E0123 18:59:03.700203 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:03.701551 kubelet[2422]: E0123 18:59:03.701286 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:03.701551 kubelet[2422]: E0123 18:59:03.701435 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:03.701718 kubelet[2422]: E0123 18:59:03.701693 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:04.701882 kubelet[2422]: E0123 18:59:04.701823 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:04.704056 kubelet[2422]: E0123 18:59:04.702470 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:04.704056 kubelet[2422]: E0123 18:59:04.702618 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:04.704056 kubelet[2422]: E0123 18:59:04.702871 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:04.848737 kubelet[2422]: I0123 18:59:04.848650 2422 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:05.712777 kubelet[2422]: E0123 18:59:05.712647 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:05.715413 kubelet[2422]: E0123 18:59:05.715072 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:07.176377 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1278850929 wd_nsec: 1278851039 Jan 23 18:59:07.182521 kubelet[2422]: E0123 18:59:07.182478 2422 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:07.183352 kubelet[2422]: E0123 18:59:07.182652 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:09.102130 kubelet[2422]: E0123 18:59:09.101979 2422 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-168-154\" not found" node="172-238-168-154" Jan 23 18:59:09.153108 kubelet[2422]: I0123 18:59:09.153058 2422 kubelet_node_status.go:78] "Successfully registered node" node="172-238-168-154" Jan 23 18:59:09.153297 kubelet[2422]: E0123 18:59:09.153144 2422 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-238-168-154\": node \"172-238-168-154\" not found" Jan 23 18:59:09.176222 kubelet[2422]: I0123 18:59:09.176166 2422 apiserver.go:52] "Watching apiserver" Jan 23 18:59:09.186743 kubelet[2422]: I0123 18:59:09.186700 2422 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:59:09.186865 kubelet[2422]: I0123 18:59:09.186777 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:09.192815 kubelet[2422]: E0123 18:59:09.192776 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-168-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:09.192815 kubelet[2422]: I0123 18:59:09.192797 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:09.194448 kubelet[2422]: E0123 
18:59:09.194409 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-168-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:09.194448 kubelet[2422]: I0123 18:59:09.194440 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:09.195919 kubelet[2422]: E0123 18:59:09.195864 2422 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-168-154\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:11.156582 kubelet[2422]: I0123 18:59:11.156374 2422 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:11.193751 kubelet[2422]: E0123 18:59:11.193624 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:11.201922 kubelet[2422]: E0123 18:59:11.201657 2422 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:11.432251 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-9.scope)... Jan 23 18:59:11.433412 systemd[1]: Reloading... Jan 23 18:59:11.690512 zram_generator::config[2743]: No configuration found. Jan 23 18:59:12.332159 systemd[1]: Reloading finished in 897 ms. Jan 23 18:59:12.365444 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:12.381156 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:59:12.381743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:12.382079 systemd[1]: kubelet.service: Consumed 1.448s CPU time, 125.8M memory peak. Jan 23 18:59:12.387073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:59:12.794460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:59:12.806609 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:59:12.912840 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:59:12.912840 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
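The kubelet entries throughout use klog's header layout, "Lmmdd hh:mm:ss.uuuuuu PID file:line] msg", where the leading letter is the severity (I/W/E/F), so "E0123" is an error logged on 01/23. A small regexp, written here purely for illustration, makes the fields explicit:

```go
// Pull apart a klog-style header like the kubelet lines above.
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E0123 18:59:13.139510 2800 manager.go:513] "Failed to read data from checkpoint"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(mmdd)=%s time=%s pid=%s source=%s msg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```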
Jan 23 18:59:12.913314 kubelet[2800]: I0123 18:59:12.912930 2800 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:59:12.923506 kubelet[2800]: I0123 18:59:12.923191 2800 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:59:12.923506 kubelet[2800]: I0123 18:59:12.923220 2800 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:59:12.923506 kubelet[2800]: I0123 18:59:12.923286 2800 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:59:12.923506 kubelet[2800]: I0123 18:59:12.923299 2800 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:59:12.923762 kubelet[2800]: I0123 18:59:12.923682 2800 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:59:12.925474 kubelet[2800]: I0123 18:59:12.925428 2800 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:59:12.929011 kubelet[2800]: I0123 18:59:12.927820 2800 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:59:12.943475 kubelet[2800]: I0123 18:59:12.942129 2800 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:59:12.949674 kubelet[2800]: I0123 18:59:12.949525 2800 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 18:59:12.949994 kubelet[2800]: I0123 18:59:12.949948 2800 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:59:12.951088 kubelet[2800]: I0123 18:59:12.949983 2800 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-168-154","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:59:12.951088 kubelet[2800]: I0123 18:59:12.950241 2800 topology_manager.go:138] "Creating topology manager with 
none policy" Jan 23 18:59:12.951088 kubelet[2800]: I0123 18:59:12.950251 2800 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:59:12.951088 kubelet[2800]: I0123 18:59:12.950297 2800 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:59:12.951088 kubelet[2800]: I0123 18:59:12.951075 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:12.952432 kubelet[2800]: I0123 18:59:12.951624 2800 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:59:12.952432 kubelet[2800]: I0123 18:59:12.951671 2800 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:59:12.952432 kubelet[2800]: I0123 18:59:12.951702 2800 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:59:12.952432 kubelet[2800]: I0123 18:59:12.951744 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:59:12.959247 kubelet[2800]: I0123 18:59:12.959013 2800 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:59:12.962530 kubelet[2800]: I0123 18:59:12.962048 2800 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:59:12.962530 kubelet[2800]: I0123 18:59:12.962082 2800 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:59:12.972985 kubelet[2800]: I0123 18:59:12.971215 2800 server.go:1262] "Started kubelet" Jan 23 18:59:12.973512 kubelet[2800]: I0123 18:59:12.973374 2800 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:59:12.974055 kubelet[2800]: I0123 18:59:12.973994 2800 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:59:12.974101 kubelet[2800]: I0123 18:59:12.974070 2800 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:59:12.980670 kubelet[2800]: I0123 18:59:12.980269 2800 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:59:12.988789 kubelet[2800]: I0123 18:59:12.988654 2800 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:59:12.990241 kubelet[2800]: I0123 18:59:12.989411 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:59:12.991786 kubelet[2800]: I0123 18:59:12.991138 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:59:12.992131 kubelet[2800]: I0123 18:59:12.991891 2800 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:59:12.993109 kubelet[2800]: I0123 18:59:12.993066 2800 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:59:12.993269 kubelet[2800]: I0123 18:59:12.993253 2800 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:59:12.995495 kubelet[2800]: E0123 18:59:12.995147 2800 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:59:13.003481 kubelet[2800]: I0123 18:59:13.001451 2800 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:59:13.003481 kubelet[2800]: I0123 18:59:13.002954 2800 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:59:13.013064 kubelet[2800]: I0123 18:59:13.013031 2800 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:59:13.057065 kubelet[2800]: I0123 18:59:13.056788 2800 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:59:13.075939 kubelet[2800]: I0123 18:59:13.075703 2800 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 18:59:13.075939 kubelet[2800]: I0123 18:59:13.075766 2800 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:59:13.075939 kubelet[2800]: I0123 18:59:13.075810 2800 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:59:13.075939 kubelet[2800]: E0123 18:59:13.075891 2800 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128069 2800 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128088 2800 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128114 2800 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128257 2800 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128273 2800 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128299 2800 policy_none.go:49] "None policy: Start" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128325 2800 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128344 2800 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128462 2800 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 18:59:13.128873 kubelet[2800]: I0123 18:59:13.128485 2800 policy_none.go:47] "Start" Jan 23 18:59:13.139532 kubelet[2800]: E0123 18:59:13.139510 2800 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:59:13.140099 kubelet[2800]: I0123 18:59:13.140083 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:59:13.141379 kubelet[2800]: I0123 18:59:13.141341 2800 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:59:13.143551 kubelet[2800]: I0123 18:59:13.143535 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:59:13.146564 kubelet[2800]: E0123 18:59:13.146440 2800 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:59:13.186482 kubelet[2800]: I0123 18:59:13.186423 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:13.187294 kubelet[2800]: I0123 18:59:13.187207 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:13.188284 kubelet[2800]: I0123 18:59:13.188266 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.194412 kubelet[2800]: I0123 18:59:13.194332 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:13.194412 kubelet[2800]: I0123 18:59:13.194362 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ce838029d201cc1e28052cb6ec5b7fd-kubeconfig\") pod \"kube-scheduler-172-238-168-154\" (UID: \"7ce838029d201cc1e28052cb6ec5b7fd\") " pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:13.194412 kubelet[2800]: I0123 18:59:13.194386 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-ca-certs\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:13.194803 kubelet[2800]: I0123 18:59:13.194687 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/380e07b09cd90c4f24a9770538155964-k8s-certs\") pod \"kube-apiserver-172-238-168-154\" (UID: \"380e07b09cd90c4f24a9770538155964\") " pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:13.195324 kubelet[2800]: I0123 18:59:13.194711 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-ca-certs\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.197872 kubelet[2800]: I0123 18:59:13.196417 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-flexvolume-dir\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.198112 kubelet[2800]: I0123 18:59:13.198071 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-k8s-certs\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.198151 kubelet[2800]: I0123 18:59:13.198115 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-kubeconfig\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.198151 kubelet[2800]: I0123 18:59:13.198133 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/211f039970eac1897076b0fc0d5b82c4-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-168-154\" (UID: \"211f039970eac1897076b0fc0d5b82c4\") " pod="kube-system/kube-controller-manager-172-238-168-154" Jan 23 18:59:13.200961 kubelet[2800]: E0123 18:59:13.200740 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-168-154\" already exists" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:13.273722 kubelet[2800]: I0123 18:59:13.273669 2800 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-154" Jan 23 18:59:13.284634 kubelet[2800]: I0123 18:59:13.284550 2800 kubelet_node_status.go:124] "Node was previously registered" node="172-238-168-154" Jan 23 18:59:13.285487 kubelet[2800]: I0123 18:59:13.285472 2800 kubelet_node_status.go:78] "Successfully registered node" node="172-238-168-154" Jan 23 18:59:13.502643 kubelet[2800]: E0123 18:59:13.502279 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:13.502643 kubelet[2800]: E0123 18:59:13.502379 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:13.502643 kubelet[2800]: E0123 18:59:13.502557 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:13.956564 kubelet[2800]: I0123 18:59:13.955200 2800 apiserver.go:52] "Watching apiserver" Jan 23 18:59:13.993563 kubelet[2800]: I0123 18:59:13.993497 2800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:59:14.119545 kubelet[2800]: I0123 18:59:14.119507 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:14.120540 kubelet[2800]: E0123 18:59:14.120508 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:14.134009 kubelet[2800]: I0123 18:59:14.120715 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:14.134009 kubelet[2800]: E0123 18:59:14.132286 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-168-154\" already exists" pod="kube-system/kube-scheduler-172-238-168-154" Jan 23 18:59:14.134009 kubelet[2800]: E0123 18:59:14.134000 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-168-154\" already exists" pod="kube-system/kube-apiserver-172-238-168-154" Jan 23 18:59:14.134209 kubelet[2800]: E0123 18:59:14.134127 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:14.134278 kubelet[2800]: E0123 18:59:14.134262 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:14.162132 kubelet[2800]: I0123 18:59:14.162048 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-168-154" podStartSLOduration=1.16201596 podStartE2EDuration="1.16201596s" podCreationTimestamp="2026-01-23 18:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:14.145795882 +0000 UTC m=+1.324690643" watchObservedRunningTime="2026-01-23 18:59:14.16201596 +0000 UTC m=+1.340910721" Jan 23 18:59:14.172933 kubelet[2800]: I0123 18:59:14.172862 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-168-154" podStartSLOduration=1.172847572 podStartE2EDuration="1.172847572s" podCreationTimestamp="2026-01-23 18:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:14.162524047 +0000 UTC m=+1.341418808" watchObservedRunningTime="2026-01-23 18:59:14.172847572 +0000 UTC m=+1.351742333" Jan 23 18:59:15.121933 kubelet[2800]: E0123 18:59:15.121857 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:15.122799 kubelet[2800]: E0123 18:59:15.121873 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:16.129364 kubelet[2800]: E0123 18:59:16.129293 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:16.453640 kubelet[2800]: I0123 18:59:16.453593 2800 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:59:16.455208 containerd[1564]: time="2026-01-23T18:59:16.455102846Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:59:16.456869 kubelet[2800]: I0123 18:59:16.455414 2800 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:59:17.301708 systemd[1]: Created slice kubepods-besteffort-podb1607301_84ea_444a_b42e_44a182280d1e.slice - libcontainer container kubepods-besteffort-podb1607301_84ea_444a_b42e_44a182280d1e.slice. 
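[Note: in the pod_startup_latency_tracker entries above, both pulling timestamps are the zero time (0001-01-01 00:00:00), i.e. no image pull was needed, which is why podStartSLOduration and podStartE2EDuration agree. The figure is simply the observed running time minus the pod creation timestamp; below is a minimal Go check of that arithmetic, with values copied from the kube-apiserver entry — the subtraction is an inference from the numbers, not the tracker's actual code.]

```go
package main

import (
	"fmt"
	"time"
)

// Reconstructing podStartSLOduration for kube-apiserver-172-238-168-154.
// With no image pull (first/lastFinishedPulling are the zero time), the
// reported duration matches watchObservedRunningTime - podCreationTimestamp.
func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:59:13Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-23T18:59:14.16201596Z")
	fmt.Println(observed.Sub(created)) // 1.16201596s == podStartSLOduration
}
```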
Jan 23 18:59:17.356518 kubelet[2800]: I0123 18:59:17.356468 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1607301-84ea-444a-b42e-44a182280d1e-kube-proxy\") pod \"kube-proxy-88s7w\" (UID: \"b1607301-84ea-444a-b42e-44a182280d1e\") " pod="kube-system/kube-proxy-88s7w" Jan 23 18:59:17.356518 kubelet[2800]: I0123 18:59:17.356517 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1607301-84ea-444a-b42e-44a182280d1e-xtables-lock\") pod \"kube-proxy-88s7w\" (UID: \"b1607301-84ea-444a-b42e-44a182280d1e\") " pod="kube-system/kube-proxy-88s7w" Jan 23 18:59:17.356518 kubelet[2800]: I0123 18:59:17.356537 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1607301-84ea-444a-b42e-44a182280d1e-lib-modules\") pod \"kube-proxy-88s7w\" (UID: \"b1607301-84ea-444a-b42e-44a182280d1e\") " pod="kube-system/kube-proxy-88s7w" Jan 23 18:59:17.357274 kubelet[2800]: I0123 18:59:17.356553 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89424\" (UniqueName: \"kubernetes.io/projected/b1607301-84ea-444a-b42e-44a182280d1e-kube-api-access-89424\") pod \"kube-proxy-88s7w\" (UID: \"b1607301-84ea-444a-b42e-44a182280d1e\") " pod="kube-system/kube-proxy-88s7w" Jan 23 18:59:17.457490 kubelet[2800]: I0123 18:59:17.456712 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44t6q\" (UniqueName: \"kubernetes.io/projected/63f74f70-e0df-4d61-8030-1e28aa727981-kube-api-access-44t6q\") pod \"tigera-operator-65cdcdfd6d-hthk6\" (UID: \"63f74f70-e0df-4d61-8030-1e28aa727981\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hthk6" Jan 23 18:59:17.457490 kubelet[2800]: I0123 18:59:17.456763 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63f74f70-e0df-4d61-8030-1e28aa727981-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hthk6\" (UID: \"63f74f70-e0df-4d61-8030-1e28aa727981\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hthk6" Jan 23 18:59:17.461771 systemd[1]: Created slice kubepods-besteffort-pod63f74f70_e0df_4d61_8030_1e28aa727981.slice - libcontainer container kubepods-besteffort-pod63f74f70_e0df_4d61_8030_1e28aa727981.slice. 
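[Note: the "Created slice kubepods-besteffort-pod…" entries bracketing the volume lines show the systemd cgroup driver at work. In systemd slice names, '-' encodes parent/child nesting (kubepods.slice → kubepods-besteffort.slice → per-pod slice), so the dashes inside the pod UID are rewritten to underscores — compare the UID b1607301-84ea-444a-b42e-44a182280d1e in the volume entries with the slice name above. A small illustrative sketch of that mapping, not the kubelet's actual code:]

```go
package main

import (
	"fmt"
	"strings"
)

// sliceForBestEffortPod derives the systemd slice name for a BestEffort pod:
// dashes in the UID would imply bogus intermediate slices, so they become
// underscores. Illustrative reconstruction only.
func sliceForBestEffortPod(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// UID taken from the kube-proxy-88s7w volume entries above.
	fmt.Println(sliceForBestEffortPod("b1607301-84ea-444a-b42e-44a182280d1e"))
	// kubepods-besteffort-podb1607301_84ea_444a_b42e_44a182280d1e.slice
}
```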
Jan 23 18:59:17.613660 kubelet[2800]: E0123 18:59:17.613446 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:17.616354 containerd[1564]: time="2026-01-23T18:59:17.616179536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88s7w,Uid:b1607301-84ea-444a-b42e-44a182280d1e,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:17.733923 containerd[1564]: time="2026-01-23T18:59:17.733703855Z" level=info msg="connecting to shim a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d" address="unix:///run/containerd/s/b6adb4674ba2fc34c87ce25f9e158bca810d91999c5b6b6ce1c7c9be8dd805dd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:17.781381 containerd[1564]: time="2026-01-23T18:59:17.780373266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hthk6,Uid:63f74f70-e0df-4d61-8030-1e28aa727981,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:59:17.848797 containerd[1564]: time="2026-01-23T18:59:17.848735947Z" level=info msg="connecting to shim 81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f" address="unix:///run/containerd/s/a4c2117b260949266ebeb74ceeec7d78eee09b1f43215d5bf58418c933f6927d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:17.897683 systemd[1]: Started cri-containerd-a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d.scope - libcontainer container a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d. Jan 23 18:59:17.912893 systemd[1]: Started cri-containerd-81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f.scope - libcontainer container 81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f. 
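[Note: the recurring dns.go:154 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the three libc supports (MAXNS), so the kubelet drops the extras and applies only 172.232.0.22, 172.232.0.9 and 172.232.0.19 when composing pod resolv.conf files. A rough sketch of that capping, assuming a straight truncation to the first three — the real logic lives in pkg/kubelet/network/dns:]

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers keeps at most three nameservers from a resolv.conf body,
// mirroring the behavior behind the "Nameserver limits exceeded" entries.
// Assumption: simple first-three truncation; illustrative only.
func capNameservers(resolvConf string) []string {
	var ns []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > 3 {
		ns = ns[:3] // remaining nameservers are omitted, as logged above
	}
	return ns
}

func main() {
	conf := "nameserver 172.232.0.22\nnameserver 172.232.0.9\nnameserver 172.232.0.19\nnameserver 1.1.1.1\n"
	fmt.Println(capNameservers(conf)) // [172.232.0.22 172.232.0.9 172.232.0.19]
}
```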
Jan 23 18:59:17.968810 containerd[1564]: time="2026-01-23T18:59:17.968763031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-88s7w,Uid:b1607301-84ea-444a-b42e-44a182280d1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d\"" Jan 23 18:59:17.969610 kubelet[2800]: E0123 18:59:17.969585 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:17.976274 containerd[1564]: time="2026-01-23T18:59:17.976233327Z" level=info msg="CreateContainer within sandbox \"a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:59:17.987815 containerd[1564]: time="2026-01-23T18:59:17.987762414Z" level=info msg="Container a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:17.993782 containerd[1564]: time="2026-01-23T18:59:17.993742634Z" level=info msg="CreateContainer within sandbox \"a82c6579ebc067f7436cc004988710d99cc9ec7c8c09a104d949760f3d899a0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499\"" Jan 23 18:59:17.995011 containerd[1564]: time="2026-01-23T18:59:17.994238989Z" level=info msg="StartContainer for \"a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499\"" Jan 23 18:59:17.996304 containerd[1564]: time="2026-01-23T18:59:17.996275760Z" level=info msg="connecting to shim a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499" address="unix:///run/containerd/s/b6adb4674ba2fc34c87ce25f9e158bca810d91999c5b6b6ce1c7c9be8dd805dd" protocol=ttrpc version=3 Jan 23 18:59:18.032079 systemd[1]: Started cri-containerd-a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499.scope - libcontainer container a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499. 
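[Note: the sequence above — RunPodSandbox returns a sandbox id, CreateContainer within that sandbox, then StartContainer — is the standard CRI flow between the kubelet and containerd. A hedged sketch of those three calls against the CRI v1 gRPC API follows; the socket path and metadata values are placeholders and error handling is elided, so this is an illustration of the call order, not this host's configuration.]

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Placeholder socket; containerd's default CRI endpoint.
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-88s7w", Namespace: "kube-system",
			Uid: "b1607301-84ea-444a-b42e-44a182280d1e",
		},
	}
	// 1. RunPodSandbox returns the sandbox id seen in the log.
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	// 2. CreateContainer within that sandbox.
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"}},
		SandboxConfig: sandboxCfg,
	})
	// 3. StartContainer on the returned container id.
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println("started container", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
}
```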
Jan 23 18:59:18.078198 containerd[1564]: time="2026-01-23T18:59:18.078098971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hthk6,Uid:63f74f70-e0df-4d61-8030-1e28aa727981,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f\"" Jan 23 18:59:18.081712 containerd[1564]: time="2026-01-23T18:59:18.080797636Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:59:18.133589 containerd[1564]: time="2026-01-23T18:59:18.133545678Z" level=info msg="StartContainer for \"a567471b5fdbd6eb9348d4b869b98530fc1c38c3bd634c2f2dc1d373bdc11499\" returns successfully" Jan 23 18:59:18.288007 kubelet[2800]: E0123 18:59:18.286877 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:19.139824 kubelet[2800]: E0123 18:59:19.139182 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:19.140640 kubelet[2800]: E0123 18:59:19.140598 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:19.615750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569987906.mount: Deactivated successfully. Jan 23 18:59:21.465797 containerd[1564]: time="2026-01-23T18:59:21.465718382Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:21.466922 containerd[1564]: time="2026-01-23T18:59:21.466709809Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 18:59:21.467433 containerd[1564]: time="2026-01-23T18:59:21.467401445Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:21.469733 containerd[1564]: time="2026-01-23T18:59:21.469698343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:59:21.471495 containerd[1564]: time="2026-01-23T18:59:21.470884912Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.388712363s" Jan 23 18:59:21.471495 containerd[1564]: time="2026-01-23T18:59:21.470942082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:59:21.477174 containerd[1564]: time="2026-01-23T18:59:21.477134431Z" level=info msg="CreateContainer within sandbox \"81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:59:21.486704 containerd[1564]: time="2026-01-23T18:59:21.483796225Z" level=info msg="Container 
fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:59:21.495999 containerd[1564]: time="2026-01-23T18:59:21.495968981Z" level=info msg="CreateContainer within sandbox \"81b9717adb779902febce1b475fdc925cfeb7ee44bb42e75ba2988c85af4e80f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12\"" Jan 23 18:59:21.497953 containerd[1564]: time="2026-01-23T18:59:21.496926298Z" level=info msg="StartContainer for \"fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12\"" Jan 23 18:59:21.499109 containerd[1564]: time="2026-01-23T18:59:21.499081015Z" level=info msg="connecting to shim fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12" address="unix:///run/containerd/s/a4c2117b260949266ebeb74ceeec7d78eee09b1f43215d5bf58418c933f6927d" protocol=ttrpc version=3 Jan 23 18:59:21.605168 systemd[1]: Started cri-containerd-fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12.scope - libcontainer container fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12. Jan 23 18:59:21.705252 containerd[1564]: time="2026-01-23T18:59:21.705187279Z" level=info msg="StartContainer for \"fb2506868824808890c0b1a4c218d6b1f941513b970e4099e2172a90cb76ed12\" returns successfully" Jan 23 18:59:22.152323 kubelet[2800]: E0123 18:59:22.151671 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:22.165592 kubelet[2800]: I0123 18:59:22.165471 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-88s7w" podStartSLOduration=5.165414761 podStartE2EDuration="5.165414761s" podCreationTimestamp="2026-01-23 18:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:59:19.232062997 +0000 UTC m=+6.410957768" watchObservedRunningTime="2026-01-23 18:59:22.165414761 +0000 UTC m=+9.344309532" Jan 23 18:59:22.165939 kubelet[2800]: I0123 18:59:22.165643 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hthk6" podStartSLOduration=1.773611071 podStartE2EDuration="5.165635323s" podCreationTimestamp="2026-01-23 18:59:17 +0000 UTC" firstStartedPulling="2026-01-23 18:59:18.079983289 +0000 UTC m=+5.258878050" lastFinishedPulling="2026-01-23 18:59:21.472007541 +0000 UTC m=+8.650902302" observedRunningTime="2026-01-23 18:59:22.164890857 +0000 UTC m=+9.343785618" watchObservedRunningTime="2026-01-23 18:59:22.165635323 +0000 UTC m=+9.344530084" Jan 23 18:59:23.160851 kubelet[2800]: E0123 18:59:23.160726 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:30.389231 sudo[1815]: pam_unix(sudo:session): session closed for user root Jan 23 18:59:30.434239 sshd[1814]: Connection closed by 68.220.241.50 port 49852 Jan 23 18:59:30.439102 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:30.450834 systemd[1]: sshd@8-172.238.168.154:22-68.220.241.50:49852.service: Deactivated successfully. Jan 23 18:59:30.462583 systemd[1]: session-9.scope: Deactivated successfully. 
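[Note: for scale, the tigera-operator pull recorded above moved 25,061,691 bytes in 3.388712363 s — about 25 MB at roughly 7.4 MB/s — and the stored image metadata reports a similar size (25,057,686 bytes).]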
Jan 23 18:59:30.464088 systemd[1]: session-9.scope: Consumed 8.655s CPU time, 231.2M memory peak. Jan 23 18:59:30.470046 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:59:30.474354 systemd-logind[1533]: Removed session 9. Jan 23 18:59:37.070023 systemd[1]: Created slice kubepods-besteffort-podc48acce4_b103_4bee_a276_d657f8375504.slice - libcontainer container kubepods-besteffort-podc48acce4_b103_4bee_a276_d657f8375504.slice. Jan 23 18:59:37.230997 kubelet[2800]: I0123 18:59:37.230600 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c48acce4-b103-4bee-a276-d657f8375504-tigera-ca-bundle\") pod \"calico-typha-6bb5759bf-hlfz9\" (UID: \"c48acce4-b103-4bee-a276-d657f8375504\") " pod="calico-system/calico-typha-6bb5759bf-hlfz9" Jan 23 18:59:37.230997 kubelet[2800]: I0123 18:59:37.230732 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c48acce4-b103-4bee-a276-d657f8375504-typha-certs\") pod \"calico-typha-6bb5759bf-hlfz9\" (UID: \"c48acce4-b103-4bee-a276-d657f8375504\") " pod="calico-system/calico-typha-6bb5759bf-hlfz9" Jan 23 18:59:37.233108 kubelet[2800]: I0123 18:59:37.231188 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6296\" (UniqueName: \"kubernetes.io/projected/c48acce4-b103-4bee-a276-d657f8375504-kube-api-access-t6296\") pod \"calico-typha-6bb5759bf-hlfz9\" (UID: \"c48acce4-b103-4bee-a276-d657f8375504\") " pod="calico-system/calico-typha-6bb5759bf-hlfz9" Jan 23 18:59:37.467260 kubelet[2800]: E0123 18:59:37.467189 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:37.469658 containerd[1564]: time="2026-01-23T18:59:37.469510693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb5759bf-hlfz9,Uid:c48acce4-b103-4bee-a276-d657f8375504,Namespace:calico-system,Attempt:0,}" Jan 23 18:59:37.631220 systemd[1]: Created slice kubepods-besteffort-pod7b26187a_4dba_475c_9fb5_5901f2b2ca13.slice - libcontainer container kubepods-besteffort-pod7b26187a_4dba_475c_9fb5_5901f2b2ca13.slice. Jan 23 18:59:37.658930 containerd[1564]: time="2026-01-23T18:59:37.655082377Z" level=info msg="connecting to shim adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1" address="unix:///run/containerd/s/f6a54646685b868aa24c1929a76b3454d10f3b41e0f156b56c7299f8dc32ff9d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:37.725157 systemd[1]: Started cri-containerd-adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1.scope - libcontainer container adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1. 
Jan 23 18:59:37.776884 kubelet[2800]: I0123 18:59:37.763399 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b26187a-4dba-475c-9fb5-5901f2b2ca13-node-certs\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.776884 kubelet[2800]: I0123 18:59:37.763524 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-xtables-lock\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.776884 kubelet[2800]: I0123 18:59:37.763589 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-policysync\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.776884 kubelet[2800]: I0123 18:59:37.763620 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b26187a-4dba-475c-9fb5-5901f2b2ca13-tigera-ca-bundle\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.776884 kubelet[2800]: I0123 18:59:37.763836 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-cni-net-dir\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777289 kubelet[2800]: I0123 18:59:37.764070 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-var-run-calico\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777289 kubelet[2800]: I0123 18:59:37.764093 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-cni-log-dir\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777289 kubelet[2800]: I0123 18:59:37.764108 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-flexvol-driver-host\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777289 kubelet[2800]: I0123 18:59:37.764121 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-lib-modules\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777289 kubelet[2800]: I0123 18:59:37.764138 2800 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfk5\" (UniqueName: \"kubernetes.io/projected/7b26187a-4dba-475c-9fb5-5901f2b2ca13-kube-api-access-bvfk5\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777553 kubelet[2800]: I0123 18:59:37.764163 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-cni-bin-dir\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777553 kubelet[2800]: I0123 18:59:37.764183 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b26187a-4dba-475c-9fb5-5901f2b2ca13-var-lib-calico\") pod \"calico-node-ns4ld\" (UID: \"7b26187a-4dba-475c-9fb5-5901f2b2ca13\") " pod="calico-system/calico-node-ns4ld" Jan 23 18:59:37.777553 kubelet[2800]: E0123 18:59:37.770655 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 18:59:37.866983 kubelet[2800]: I0123 18:59:37.865239 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fs8r\" (UniqueName: \"kubernetes.io/projected/04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd-kube-api-access-4fs8r\") pod \"csi-node-driver-7pxmr\" (UID: \"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd\") " pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:37.867417 kubelet[2800]: I0123 18:59:37.867385 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd-kubelet-dir\") pod \"csi-node-driver-7pxmr\" (UID: \"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd\") " pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:37.867490 kubelet[2800]: I0123 18:59:37.867421 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd-socket-dir\") pod \"csi-node-driver-7pxmr\" (UID: \"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd\") " pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:37.867561 kubelet[2800]: I0123 18:59:37.867534 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd-registration-dir\") pod \"csi-node-driver-7pxmr\" (UID: \"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd\") " pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:37.867603 kubelet[2800]: I0123 18:59:37.867580 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd-varrun\") pod \"csi-node-driver-7pxmr\" (UID: \"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd\") " pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:37.871481 kubelet[2800]: E0123 18:59:37.871158 2800 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 23 18:59:37.871481 kubelet[2800]: W0123 18:59:37.871201 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.871481 kubelet[2800]: E0123 18:59:37.871253 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.872075 kubelet[2800]: E0123 18:59:37.872042 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.872075 kubelet[2800]: W0123 18:59:37.872063 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.872244 kubelet[2800]: E0123 18:59:37.872078 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.872393 kubelet[2800]: E0123 18:59:37.872368 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.872393 kubelet[2800]: W0123 18:59:37.872387 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.872502 kubelet[2800]: E0123 18:59:37.872400 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875085 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.919182 kubelet[2800]: W0123 18:59:37.875113 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875132 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875410 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.919182 kubelet[2800]: W0123 18:59:37.875421 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875432 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875855 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.919182 kubelet[2800]: W0123 18:59:37.875865 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.875875 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.919182 kubelet[2800]: E0123 18:59:37.876109 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.921963 kubelet[2800]: W0123 18:59:37.876121 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.876140 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.876428 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.921963 kubelet[2800]: W0123 18:59:37.876439 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.876450 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.876725 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.921963 kubelet[2800]: W0123 18:59:37.876735 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.876746 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.921963 kubelet[2800]: E0123 18:59:37.878099 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.921963 kubelet[2800]: W0123 18:59:37.878113 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.878127 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.878385 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.922240 kubelet[2800]: W0123 18:59:37.878395 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.878405 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.878838 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.922240 kubelet[2800]: W0123 18:59:37.878847 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.878858 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.921822 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.922240 kubelet[2800]: W0123 18:59:37.921843 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.922240 kubelet[2800]: E0123 18:59:37.921867 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.922509 kubelet[2800]: E0123 18:59:37.922442 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.922509 kubelet[2800]: W0123 18:59:37.922453 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.924513 kubelet[2800]: E0123 18:59:37.922598 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.924513 kubelet[2800]: E0123 18:59:37.923125 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.924513 kubelet[2800]: W0123 18:59:37.923137 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.924513 kubelet[2800]: E0123 18:59:37.923147 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.926627 kubelet[2800]: E0123 18:59:37.926535 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.926627 kubelet[2800]: W0123 18:59:37.926551 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.926627 kubelet[2800]: E0123 18:59:37.926565 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.936615 kubelet[2800]: E0123 18:59:37.936585 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.936615 kubelet[2800]: W0123 18:59:37.936603 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.936615 kubelet[2800]: E0123 18:59:37.936616 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.936876 kubelet[2800]: E0123 18:59:37.936855 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.936876 kubelet[2800]: W0123 18:59:37.936867 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.936876 kubelet[2800]: E0123 18:59:37.936877 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.937197 kubelet[2800]: E0123 18:59:37.937179 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.937197 kubelet[2800]: W0123 18:59:37.937194 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.937269 kubelet[2800]: E0123 18:59:37.937204 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.937488 kubelet[2800]: E0123 18:59:37.937452 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.937488 kubelet[2800]: W0123 18:59:37.937467 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.937488 kubelet[2800]: E0123 18:59:37.937476 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.937726 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.938681 kubelet[2800]: W0123 18:59:37.937737 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.937747 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.938010 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.938681 kubelet[2800]: W0123 18:59:37.938018 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.938026 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.938228 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.938681 kubelet[2800]: W0123 18:59:37.938236 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.938245 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.938681 kubelet[2800]: E0123 18:59:37.938500 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.939194 kubelet[2800]: W0123 18:59:37.938508 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.939194 kubelet[2800]: E0123 18:59:37.938517 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.939194 kubelet[2800]: E0123 18:59:37.938768 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.939194 kubelet[2800]: W0123 18:59:37.938776 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.939194 kubelet[2800]: E0123 18:59:37.938784 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.939194 kubelet[2800]: E0123 18:59:37.939022 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.939194 kubelet[2800]: W0123 18:59:37.939030 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.939194 kubelet[2800]: E0123 18:59:37.939039 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.939453 kubelet[2800]: E0123 18:59:37.939325 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.939453 kubelet[2800]: W0123 18:59:37.939333 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.939453 kubelet[2800]: E0123 18:59:37.939342 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.939706 kubelet[2800]: E0123 18:59:37.939625 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.939706 kubelet[2800]: W0123 18:59:37.939633 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.939706 kubelet[2800]: E0123 18:59:37.939642 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.940087 kubelet[2800]: E0123 18:59:37.940066 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.940087 kubelet[2800]: W0123 18:59:37.940078 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.940161 kubelet[2800]: E0123 18:59:37.940101 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.966713 kubelet[2800]: E0123 18:59:37.966675 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:37.967311 containerd[1564]: time="2026-01-23T18:59:37.967280859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ns4ld,Uid:7b26187a-4dba-475c-9fb5-5901f2b2ca13,Namespace:calico-system,Attempt:0,}" Jan 23 18:59:37.968088 kubelet[2800]: E0123 18:59:37.968052 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.968088 kubelet[2800]: W0123 18:59:37.968072 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.968088 kubelet[2800]: E0123 18:59:37.968090 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.968435 kubelet[2800]: E0123 18:59:37.968402 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.968435 kubelet[2800]: W0123 18:59:37.968419 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.968435 kubelet[2800]: E0123 18:59:37.968429 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.968843 kubelet[2800]: E0123 18:59:37.968827 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.968843 kubelet[2800]: W0123 18:59:37.968840 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.968943 kubelet[2800]: E0123 18:59:37.968850 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.969548 kubelet[2800]: E0123 18:59:37.969482 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.969548 kubelet[2800]: W0123 18:59:37.969496 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.969548 kubelet[2800]: E0123 18:59:37.969509 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:37.970573 kubelet[2800]: E0123 18:59:37.970485 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.970573 kubelet[2800]: W0123 18:59:37.970498 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.970573 kubelet[2800]: E0123 18:59:37.970509 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[same three-entry sequence repeated ~13 more times between 18:59:37.972 and 18:59:37.992, identical except for timestamps]
Jan 23 18:59:37.993871 kubelet[2800]: E0123 18:59:37.993312 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.993871 kubelet[2800]: W0123 18:59:37.993328 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.993871 kubelet[2800]: E0123 18:59:37.993450 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 18:59:37.994270 kubelet[2800]: E0123 18:59:37.994029 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.994270 kubelet[2800]: W0123 18:59:37.994066 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.994270 kubelet[2800]: E0123 18:59:37.994113 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.995525 kubelet[2800]: E0123 18:59:37.994754 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.995525 kubelet[2800]: W0123 18:59:37.995431 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.995525 kubelet[2800]: E0123 18:59:37.995455 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.995845 kubelet[2800]: E0123 18:59:37.995704 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.995845 kubelet[2800]: W0123 18:59:37.995716 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.995845 kubelet[2800]: E0123 18:59:37.995726 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:37.999451 kubelet[2800]: E0123 18:59:37.997965 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:37.999451 kubelet[2800]: W0123 18:59:37.997979 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:37.999451 kubelet[2800]: E0123 18:59:37.997991 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:38.000732 kubelet[2800]: E0123 18:59:38.000715 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:38.000951 kubelet[2800]: W0123 18:59:38.000936 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:38.002387 kubelet[2800]: E0123 18:59:38.002368 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:59:38.003847 kubelet[2800]: E0123 18:59:38.003831 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:38.005982 kubelet[2800]: W0123 18:59:38.005934 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:38.005982 kubelet[2800]: E0123 18:59:38.005956 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:38.007803 containerd[1564]: time="2026-01-23T18:59:38.007773395Z" level=info msg="connecting to shim 1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1" address="unix:///run/containerd/s/a9104838643c71530731197dfe91e6c14eba354dc37a4a458e9674ebdf110b89" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:59:38.055974 kubelet[2800]: E0123 18:59:38.045233 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:59:38.055974 kubelet[2800]: W0123 18:59:38.045262 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:59:38.055974 kubelet[2800]: E0123 18:59:38.045283 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:59:38.095567 containerd[1564]: time="2026-01-23T18:59:38.095527096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb5759bf-hlfz9,Uid:c48acce4-b103-4bee-a276-d657f8375504,Namespace:calico-system,Attempt:0,} returns sandbox id \"adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1\"" Jan 23 18:59:38.096768 kubelet[2800]: E0123 18:59:38.096735 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:38.101093 containerd[1564]: time="2026-01-23T18:59:38.101066243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:59:38.128126 systemd[1]: Started cri-containerd-1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1.scope - libcontainer container 1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1. 
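The three-line driver-call/plugins pattern above recurs because kubelet keeps re-probing the FlexVolume plugin directory: it execs the driver binary with the init argument and parses stdout as JSON, so a missing executable yields empty output, and unmarshalling "" fails before plugins.go gives up on the directory. A minimal sketch of that probe sequence (illustrative, not kubelet's actual source):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // driverStatus mirrors the JSON a FlexVolume driver is expected to print
    // for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
    	Status       string `json:"status"`
    	Message      string `json:"message,omitempty"`
    	Capabilities struct {
    		Attach bool `json:"attach"`
    	} `json:"capabilities"`
    }

    func probeInit(driver string) error {
    	// A missing binary fails here and leaves out empty, matching the
    	// W-level "driver call failed" record.
    	out, err := exec.Command(driver, "init").Output()
    	if err != nil {
    		fmt.Println("driver call failed:", err)
    	}
    	var st driverStatus
    	// Unmarshalling the empty output reproduces the logged
    	// "unexpected end of JSON input".
    	if err := json.Unmarshal(out, &st); err != nil {
    		return err
    	}
    	if st.Status != "Success" {
    		return fmt.Errorf("driver init failed: %s", st.Message)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }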
Jan 23 18:59:38.251044 containerd[1564]: time="2026-01-23T18:59:38.250342107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ns4ld,Uid:7b26187a-4dba-475c-9fb5-5901f2b2ca13,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\""
Jan 23 18:59:39.077869 kubelet[2800]: E0123 18:59:39.077215 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd"
Jan 23 18:59:39.153678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603480085.mount: Deactivated successfully.
Jan 23 18:59:40.731495 containerd[1564]: time="2026-01-23T18:59:40.731427483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:40.732645 containerd[1564]: time="2026-01-23T18:59:40.732361586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 18:59:40.733407 containerd[1564]: time="2026-01-23T18:59:40.733357059Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:40.735639 containerd[1564]: time="2026-01-23T18:59:40.735614035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:40.736564 containerd[1564]: time="2026-01-23T18:59:40.736517747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.635288494s"
Jan 23 18:59:40.736564 containerd[1564]: time="2026-01-23T18:59:40.736563137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 18:59:40.739203 containerd[1564]: time="2026-01-23T18:59:40.739124544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 18:59:40.762098 containerd[1564]: time="2026-01-23T18:59:40.762043365Z" level=info msg="CreateContainer within sandbox \"adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 18:59:40.768955 containerd[1564]: time="2026-01-23T18:59:40.768126251Z" level=info msg="Container 949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:40.778046 containerd[1564]: time="2026-01-23T18:59:40.777803057Z" level=info msg="CreateContainer within sandbox \"adc149f69d007b98f6b9fbad2a947c9261e92572e625ff50a0f19e023b2131f1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f\""
Jan 23 18:59:40.779093 containerd[1564]: time="2026-01-23T18:59:40.779057251Z" level=info msg="StartContainer for \"949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f\""
Jan 23 18:59:40.780556 containerd[1564]: time="2026-01-23T18:59:40.780489684Z" level=info msg="connecting to shim 949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f" address="unix:///run/containerd/s/f6a54646685b868aa24c1929a76b3454d10f3b41e0f156b56c7299f8dc32ff9d" protocol=ttrpc version=3
Jan 23 18:59:41.063255 systemd[1]: Started cri-containerd-949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f.scope - libcontainer container 949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f.
Jan 23 18:59:41.211881 containerd[1564]: time="2026-01-23T18:59:41.211773592Z" level=info msg="StartContainer for \"949074260644ec8435d5ed8a12ded6ab2b96f7a3285eaa19e3405f7ae330c22f\" returns successfully"
Jan 23 18:59:41.351167 kubelet[2800]: E0123 18:59:41.349654 2800 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:59:41.351167 kubelet[2800]: W0123 18:59:41.349731 2800 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:59:41.351167 kubelet[2800]: E0123 18:59:41.349757 2800 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
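The recurring "Nameserver limits exceeded" records in this log reflect the glibc resolver's cap of three nameservers per resolv.conf: kubelet applies the first three entries and reports the rest as omitted. A sketch of that truncation, assuming a hypothetical fourth nameserver (the log only shows the three that were applied):

    package main

    import "fmt"

    // applyNameserverLimit mimics, as a sketch rather than kubelet's actual
    // code, the dns.go behavior behind "Nameserver limits exceeded": keep
    // the first three nameservers, drop and report the surplus.
    func applyNameserverLimit(servers []string) (applied, omitted []string) {
    	const maxNameservers = 3 // glibc resolvers honor at most three
    	if len(servers) <= maxNameservers {
    		return servers, nil
    	}
    	return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
    	applied, omitted := applyNameserverLimit([]string{
    		"172.232.0.22", "172.232.0.9", "172.232.0.19",
    		"10.0.0.53", // hypothetical fourth entry, not from the log
    	})
    	fmt.Println("applied:", applied, "omitted:", omitted)
    }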
Jan 23 18:59:41.620015 containerd[1564]: time="2026-01-23T18:59:41.619716579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:41.622726 containerd[1564]: time="2026-01-23T18:59:41.622690646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 23 18:59:41.623814 containerd[1564]: time="2026-01-23T18:59:41.623783099Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:41.627890 containerd[1564]: time="2026-01-23T18:59:41.627852380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:41.629375 containerd[1564]: time="2026-01-23T18:59:41.629342293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 890.177269ms"
Jan 23 18:59:41.629422 containerd[1564]: time="2026-01-23T18:59:41.629391103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 23 18:59:41.634822 containerd[1564]: time="2026-01-23T18:59:41.634760727Z" level=info msg="CreateContainer within sandbox \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 18:59:41.646657 containerd[1564]: time="2026-01-23T18:59:41.646610507Z" level=info msg="Container 62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:41.658005 containerd[1564]: time="2026-01-23T18:59:41.657952826Z" level=info msg="CreateContainer within sandbox \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700\""
Jan 23 18:59:41.660424 containerd[1564]: time="2026-01-23T18:59:41.659586510Z" level=info msg="StartContainer for \"62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700\""
Jan 23 18:59:41.667423 containerd[1564]: time="2026-01-23T18:59:41.667381019Z" level=info msg="connecting to shim 62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700" address="unix:///run/containerd/s/a9104838643c71530731197dfe91e6c14eba354dc37a4a458e9674ebdf110b89" protocol=ttrpc version=3
Jan 23 18:59:41.781481 systemd[1]: Started cri-containerd-62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700.scope - libcontainer container 62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700.
Jan 23 18:59:41.988177 containerd[1564]: time="2026-01-23T18:59:41.988110165Z" level=info msg="StartContainer for \"62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700\" returns successfully"
Jan 23 18:59:42.220584 systemd[1]: cri-containerd-62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700.scope: Deactivated successfully.
Jan 23 18:59:42.222406 containerd[1564]: time="2026-01-23T18:59:42.222214722Z" level=info msg="received container exit event container_id:\"62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700\" id:\"62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700\" pid:3457 exited_at:{seconds:1769194782 nanos:220445308}"
Jan 23 18:59:42.253837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62be4193bc92bb54038a273692fbf112ad101f7f32e96cf65386a1a88008c700-rootfs.mount: Deactivated successfully.
Jan 23 18:59:42.261148 kubelet[2800]: I0123 18:59:42.261090 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:59:42.290811 kubelet[2800]: I0123 18:59:42.288634 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bb5759bf-hlfz9" podStartSLOduration=2.649302816 podStartE2EDuration="5.288576482s" podCreationTimestamp="2026-01-23 18:59:37 +0000 UTC" firstStartedPulling="2026-01-23 18:59:38.098867605 +0000 UTC m=+25.277762376" lastFinishedPulling="2026-01-23 18:59:40.738141281 +0000 UTC m=+27.917036042" observedRunningTime="2026-01-23 18:59:41.264991706 +0000 UTC m=+28.443886487" watchObservedRunningTime="2026-01-23 18:59:42.288576482 +0000 UTC m=+29.467471243"
Jan 23 18:59:43.268942 containerd[1564]: time="2026-01-23T18:59:43.268537226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
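The durations in the "Observed pod startup duration" record above are consistent with watchObservedRunningTime minus podCreationTimestamp for the E2E figure, with image-pull time excluded from the SLO figure. A sketch using the timestamps from that record (podCreationTimestamp is only second-granular in the log, so the SLO value matches approximately):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the pod_startup_latency_tracker record above.
    	created := time.Date(2026, 1, 23, 18, 59, 37, 0, time.UTC)
    	firstPull := time.Date(2026, 1, 23, 18, 59, 38, 98867605, time.UTC)
    	lastPull := time.Date(2026, 1, 23, 18, 59, 40, 738141281, time.UTC)
    	watchRunning := time.Date(2026, 1, 23, 18, 59, 42, 288576482, time.UTC)

    	e2e := watchRunning.Sub(created)     // 5.288576482s = podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // ~2.6493s = podStartSLOduration, pull time excluded
    	fmt.Println(e2e, slo)
    }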
Jan 23 18:59:47.800113 containerd[1564]: time="2026-01-23T18:59:47.799263163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:47.801614 containerd[1564]: time="2026-01-23T18:59:47.801537957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 23 18:59:47.802036 containerd[1564]: time="2026-01-23T18:59:47.801638137Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:47.807504 containerd[1564]: time="2026-01-23T18:59:47.806312727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:59:47.807504 containerd[1564]: time="2026-01-23T18:59:47.807244358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.538639511s"
Jan 23 18:59:47.807504 containerd[1564]: time="2026-01-23T18:59:47.807320838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 23 18:59:47.814737 containerd[1564]: time="2026-01-23T18:59:47.814706872Z" level=info msg="CreateContainer within sandbox \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 18:59:47.843958 containerd[1564]: time="2026-01-23T18:59:47.843272186Z" level=info msg="Container a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:59:47.846734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688646709.mount: Deactivated successfully.
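The install-cni container prepared above is what eventually clears the repeated "cni plugin not initialized" errors: kubelet reports the network ready once a usable CNI conflist exists under /etc/cni/net.d. A rough sketch of that effect; the file name and contents here are illustrative, not Calico's actual installer output:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // installConflist writes a CNI network configuration into the directory
    // the container runtime watches; this is the essential side effect of an
    // install-cni style init container.
    func installConflist(dir, name, conflist string) error {
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	return os.WriteFile(filepath.Join(dir, name), []byte(conflist), 0o644)
    }

    func main() {
    	// Hypothetical minimal conflist for illustration only.
    	err := installConflist("/etc/cni/net.d", "10-calico.conflist",
    		`{"name":"k8s-pod-network","cniVersion":"0.3.1","plugins":[{"type":"calico"}]}`)
    	fmt.Println(err)
    }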
Jan 23 18:59:47.866404 containerd[1564]: time="2026-01-23T18:59:47.866283600Z" level=info msg="CreateContainer within sandbox \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad\"" Jan 23 18:59:47.867567 containerd[1564]: time="2026-01-23T18:59:47.867545181Z" level=info msg="StartContainer for \"a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad\"" Jan 23 18:59:47.870479 containerd[1564]: time="2026-01-23T18:59:47.870341387Z" level=info msg="connecting to shim a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad" address="unix:///run/containerd/s/a9104838643c71530731197dfe91e6c14eba354dc37a4a458e9674ebdf110b89" protocol=ttrpc version=3 Jan 23 18:59:47.956360 systemd[1]: Started cri-containerd-a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad.scope - libcontainer container a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad. Jan 23 18:59:48.248935 containerd[1564]: time="2026-01-23T18:59:48.248458910Z" level=info msg="StartContainer for \"a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad\" returns successfully" Jan 23 18:59:48.300044 kubelet[2800]: E0123 18:59:48.299283 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:49.078230 kubelet[2800]: E0123 18:59:49.077821 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 18:59:49.310356 kubelet[2800]: E0123 18:59:49.301878 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:51.085579 kubelet[2800]: E0123 18:59:51.084572 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 18:59:51.145068 systemd[1]: cri-containerd-a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad.scope: Deactivated successfully. Jan 23 18:59:51.145571 systemd[1]: cri-containerd-a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad.scope: Consumed 3.087s CPU time, 194.4M memory peak, 171.3M written to disk. Jan 23 18:59:51.151922 containerd[1564]: time="2026-01-23T18:59:51.151856752Z" level=info msg="received container exit event container_id:\"a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad\" id:\"a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad\" pid:3516 exited_at:{seconds:1769194791 nanos:151169661}" Jan 23 18:59:51.182486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a50ce2b076c217e084e8bbfd75a8d7e94b9dde07b1cf1eaeec9135b3dc4385ad-rootfs.mount: Deactivated successfully. 
Jan 23 18:59:51.206100 kubelet[2800]: I0123 18:59:51.206061 2800 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 18:59:51.346550 kubelet[2800]: E0123 18:59:51.344849 2800 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-vm7hl\" is forbidden: User \"system:node:172-238-168-154\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-168-154' and this object" podUID="c8facf2d-db59-43b4-b75d-c18e88cb697f" pod="kube-system/coredns-66bc5c9577-vm7hl"
Jan 23 18:59:51.348601 kubelet[2800]: E0123 18:59:51.347470 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-238-168-154\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-168-154' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
Jan 23 18:59:51.351801 systemd[1]: Created slice kubepods-burstable-podc8facf2d_db59_43b4_b75d_c18e88cb697f.slice - libcontainer container kubepods-burstable-podc8facf2d_db59_43b4_b75d_c18e88cb697f.slice.
Jan 23 18:59:51.366212 systemd[1]: Created slice kubepods-burstable-pod39fc028f_9ecf_48af_ba4d_3da48a6fc889.slice - libcontainer container kubepods-burstable-pod39fc028f_9ecf_48af_ba4d_3da48a6fc889.slice.
Jan 23 18:59:51.398340 systemd[1]: Created slice kubepods-besteffort-podc3f054e1_5748_465f_ba13_4eeba844abd6.slice - libcontainer container kubepods-besteffort-podc3f054e1_5748_465f_ba13_4eeba844abd6.slice.
Jan 23 18:59:51.415913 systemd[1]: Created slice kubepods-besteffort-pod9b7d7d7b_9f3b_4806_a4e8_308622bc18c5.slice - libcontainer container kubepods-besteffort-pod9b7d7d7b_9f3b_4806_a4e8_308622bc18c5.slice.
Jan 23 18:59:51.427607 systemd[1]: Created slice kubepods-besteffort-pod53afd191_0189_457d_b022_3c4e010c308d.slice - libcontainer container kubepods-besteffort-pod53afd191_0189_457d_b022_3c4e010c308d.slice.
Jan 23 18:59:51.442462 systemd[1]: Created slice kubepods-besteffort-pod6cc25f9b_f232_47b9_8c25_dc08c13b1bb7.slice - libcontainer container kubepods-besteffort-pod6cc25f9b_f232_47b9_8c25_dc08c13b1bb7.slice.
Jan 23 18:59:51.454721 systemd[1]: Created slice kubepods-besteffort-pod26eb3ddf_a640_4f9f_b12e_8ab0e31fdfab.slice - libcontainer container kubepods-besteffort-pod26eb3ddf_a640_4f9f_b12e_8ab0e31fdfab.slice.
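The "Created slice" names above encode the pod's QoS class and UID: with the systemd cgroup driver, each pod gets kubepods-<qos>-pod<uid>.slice, with the UID's dashes escaped to underscores. A sketch reproducing the first slice name logged above:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the systemd slice name for a pod; systemd unit
    // names reserve "-" as a hierarchy separator, so the UID is escaped.
    func podSliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	// UID taken from the coredns-66bc5c9577-vm7hl pod logged above.
    	fmt.Println(podSliceName("burstable", "c8facf2d-db59-43b4-b75d-c18e88cb697f"))
    	// -> kubepods-burstable-podc8facf2d_db59_43b4_b75d_c18e88cb697f.slice
    }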
Jan 23 18:59:51.499668 kubelet[2800]: I0123 18:59:51.499617 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc25f9b-f232-47b9-8c25-dc08c13b1bb7-config\") pod \"goldmane-7c778bb748-z4wzh\" (UID: \"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7\") " pod="calico-system/goldmane-7c778bb748-z4wzh"
Jan 23 18:59:51.500135 kubelet[2800]: I0123 18:59:51.500043 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc25f9b-f232-47b9-8c25-dc08c13b1bb7-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-z4wzh\" (UID: \"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7\") " pod="calico-system/goldmane-7c778bb748-z4wzh"
Jan 23 18:59:51.500135 kubelet[2800]: I0123 18:59:51.500070 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-backend-key-pair\") pod \"whisker-84b8fb74fd-tjzsj\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " pod="calico-system/whisker-84b8fb74fd-tjzsj"
Jan 23 18:59:51.500135 kubelet[2800]: I0123 18:59:51.500114 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6cc25f9b-f232-47b9-8c25-dc08c13b1bb7-goldmane-key-pair\") pod \"goldmane-7c778bb748-z4wzh\" (UID: \"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7\") " pod="calico-system/goldmane-7c778bb748-z4wzh"
Jan 23 18:59:51.502802 kubelet[2800]: I0123 18:59:51.500145 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab-calico-apiserver-certs\") pod \"calico-apiserver-7cf455955c-jr65c\" (UID: \"26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab\") " pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c"
Jan 23 18:59:51.502975 kubelet[2800]: I0123 18:59:51.502832 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b7d7d7b-9f3b-4806-a4e8-308622bc18c5-tigera-ca-bundle\") pod \"calico-kube-controllers-7c474f47cb-gsdmr\" (UID: \"9b7d7d7b-9f3b-4806-a4e8-308622bc18c5\") " pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr"
Jan 23 18:59:51.502975 kubelet[2800]: I0123 18:59:51.502856 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kmw\" (UniqueName: \"kubernetes.io/projected/c3f054e1-5748-465f-ba13-4eeba844abd6-kube-api-access-r7kmw\") pod \"whisker-84b8fb74fd-tjzsj\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " pod="calico-system/whisker-84b8fb74fd-tjzsj"
Jan 23 18:59:51.502975 kubelet[2800]: I0123 18:59:51.502879 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgld\" (UniqueName: \"kubernetes.io/projected/9b7d7d7b-9f3b-4806-a4e8-308622bc18c5-kube-api-access-ptgld\") pod \"calico-kube-controllers-7c474f47cb-gsdmr\" (UID: \"9b7d7d7b-9f3b-4806-a4e8-308622bc18c5\") " pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr"
Jan 23 18:59:51.502975 kubelet[2800]: I0123 18:59:51.502938 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39fc028f-9ecf-48af-ba4d-3da48a6fc889-config-volume\") pod \"coredns-66bc5c9577-sxppt\" (UID: \"39fc028f-9ecf-48af-ba4d-3da48a6fc889\") " pod="kube-system/coredns-66bc5c9577-sxppt"
Jan 23 18:59:51.502975 kubelet[2800]: I0123 18:59:51.502955 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc45j\" (UniqueName: \"kubernetes.io/projected/c8facf2d-db59-43b4-b75d-c18e88cb697f-kube-api-access-gc45j\") pod \"coredns-66bc5c9577-vm7hl\" (UID: \"c8facf2d-db59-43b4-b75d-c18e88cb697f\") " pod="kube-system/coredns-66bc5c9577-vm7hl"
Jan 23 18:59:51.503685 kubelet[2800]: I0123 18:59:51.502979 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72vz\" (UniqueName: \"kubernetes.io/projected/39fc028f-9ecf-48af-ba4d-3da48a6fc889-kube-api-access-x72vz\") pod \"coredns-66bc5c9577-sxppt\" (UID: \"39fc028f-9ecf-48af-ba4d-3da48a6fc889\") " pod="kube-system/coredns-66bc5c9577-sxppt"
Jan 23 18:59:51.503685 kubelet[2800]: I0123 18:59:51.503011 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-ca-bundle\") pod \"whisker-84b8fb74fd-tjzsj\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " pod="calico-system/whisker-84b8fb74fd-tjzsj"
Jan 23 18:59:51.503685 kubelet[2800]: I0123 18:59:51.503025 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/53afd191-0189-457d-b022-3c4e010c308d-calico-apiserver-certs\") pod \"calico-apiserver-7cf455955c-czmzf\" (UID: \"53afd191-0189-457d-b022-3c4e010c308d\") " pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf"
Jan 23 18:59:51.503685 kubelet[2800]: I0123 18:59:51.503044 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjrh\" (UniqueName: \"kubernetes.io/projected/53afd191-0189-457d-b022-3c4e010c308d-kube-api-access-qwjrh\") pod \"calico-apiserver-7cf455955c-czmzf\" (UID: \"53afd191-0189-457d-b022-3c4e010c308d\") " pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf"
Jan 23 18:59:51.503685 kubelet[2800]: I0123 18:59:51.503084 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txmbp\" (UniqueName: \"kubernetes.io/projected/26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab-kube-api-access-txmbp\") pod \"calico-apiserver-7cf455955c-jr65c\" (UID: \"26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab\") " pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c"
Jan 23 18:59:51.504071 kubelet[2800]: I0123 18:59:51.503299 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqxpx\" (UniqueName: \"kubernetes.io/projected/6cc25f9b-f232-47b9-8c25-dc08c13b1bb7-kube-api-access-cqxpx\") pod \"goldmane-7c778bb748-z4wzh\" (UID: \"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7\") " pod="calico-system/goldmane-7c778bb748-z4wzh"
Jan 23 18:59:51.504071 kubelet[2800]: I0123 18:59:51.503322 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8facf2d-db59-43b4-b75d-c18e88cb697f-config-volume\") pod \"coredns-66bc5c9577-vm7hl\" (UID: \"c8facf2d-db59-43b4-b75d-c18e88cb697f\") " pod="kube-system/coredns-66bc5c9577-vm7hl"
Jan 23 18:59:51.711174 containerd[1564]: time="2026-01-23T18:59:51.711099221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84b8fb74fd-tjzsj,Uid:c3f054e1-5748-465f-ba13-4eeba844abd6,Namespace:calico-system,Attempt:0,}"
Jan 23 18:59:51.737762 containerd[1564]: time="2026-01-23T18:59:51.737386302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c474f47cb-gsdmr,Uid:9b7d7d7b-9f3b-4806-a4e8-308622bc18c5,Namespace:calico-system,Attempt:0,}"
Jan 23 18:59:51.740756 containerd[1564]: time="2026-01-23T18:59:51.740599927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-czmzf,Uid:53afd191-0189-457d-b022-3c4e010c308d,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 18:59:51.750086 containerd[1564]: time="2026-01-23T18:59:51.749965421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z4wzh,Uid:6cc25f9b-f232-47b9-8c25-dc08c13b1bb7,Namespace:calico-system,Attempt:0,}"
Jan 23 18:59:51.767063 containerd[1564]: time="2026-01-23T18:59:51.767004938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-jr65c,Uid:26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 18:59:51.932927 containerd[1564]: time="2026-01-23T18:59:51.932863319Z" level=error msg="Failed to destroy network for sandbox \"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:59:51.934347 containerd[1564]: time="2026-01-23T18:59:51.934282941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-czmzf,Uid:53afd191-0189-457d-b022-3c4e010c308d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:59:51.934879 kubelet[2800]: E0123 18:59:51.934589 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:59:51.934879 kubelet[2800]: E0123 18:59:51.934723 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf"
Jan 23 18:59:51.934879 kubelet[2800]: E0123 18:59:51.934754 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" Jan 23 18:59:51.935058 kubelet[2800]: E0123 18:59:51.934820 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2895a9865efed0044f4d5f5dfc9cd4a94b922e26fb6dbb53fc49fd275df423d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 18:59:51.971515 containerd[1564]: time="2026-01-23T18:59:51.971267199Z" level=error msg="Failed to destroy network for sandbox \"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.975739 containerd[1564]: time="2026-01-23T18:59:51.975604115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z4wzh,Uid:6cc25f9b-f232-47b9-8c25-dc08c13b1bb7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.976458 kubelet[2800]: E0123 18:59:51.976239 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.976458 kubelet[2800]: E0123 18:59:51.976323 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-z4wzh" Jan 23 18:59:51.976458 kubelet[2800]: E0123 18:59:51.976389 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-z4wzh" Jan 23 18:59:51.976837 kubelet[2800]: E0123 18:59:51.976460 2800 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b999e587c6e4a6cbbc0186e2ff65043a9abdd3a03e22a9c36feefbfbb779411\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 18:59:51.993089 containerd[1564]: time="2026-01-23T18:59:51.993023923Z" level=error msg="Failed to destroy network for sandbox \"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.995214 containerd[1564]: time="2026-01-23T18:59:51.995160236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c474f47cb-gsdmr,Uid:9b7d7d7b-9f3b-4806-a4e8-308622bc18c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.996205 kubelet[2800]: E0123 18:59:51.996111 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:51.996205 kubelet[2800]: E0123 18:59:51.996181 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" Jan 23 18:59:51.996205 kubelet[2800]: E0123 18:59:51.996203 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" Jan 23 18:59:51.997013 kubelet[2800]: E0123 18:59:51.996277 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5709471e037192b4719611ec6e6946658c966d8ed7787091bd7871466489635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 18:59:52.003814 containerd[1564]: time="2026-01-23T18:59:52.003777490Z" level=error msg="Failed to destroy network for sandbox \"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.004855 containerd[1564]: time="2026-01-23T18:59:52.004825432Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-jr65c,Uid:26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.006238 kubelet[2800]: E0123 18:59:52.006197 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.006288 kubelet[2800]: E0123 18:59:52.006261 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" Jan 23 18:59:52.006332 kubelet[2800]: E0123 18:59:52.006292 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" Jan 23 18:59:52.007252 kubelet[2800]: E0123 18:59:52.006386 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2968cb2aef573e2dc73b7dca1df1bff431321c356ff3bb9606595427e58097af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 18:59:52.009318 containerd[1564]: time="2026-01-23T18:59:52.009276248Z" level=error msg="Failed to destroy network for sandbox \"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.010285 containerd[1564]: time="2026-01-23T18:59:52.010252959Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84b8fb74fd-tjzsj,Uid:c3f054e1-5748-465f-ba13-4eeba844abd6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.010872 kubelet[2800]: E0123 18:59:52.010829 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.011236 kubelet[2800]: E0123 18:59:52.011203 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84b8fb74fd-tjzsj" Jan 23 18:59:52.011328 kubelet[2800]: E0123 18:59:52.011287 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84b8fb74fd-tjzsj" Jan 23 18:59:52.011584 kubelet[2800]: E0123 18:59:52.011358 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84b8fb74fd-tjzsj_calico-system(c3f054e1-5748-465f-ba13-4eeba844abd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84b8fb74fd-tjzsj_calico-system(c3f054e1-5748-465f-ba13-4eeba844abd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efa45ba73a860038e40ffa8f330a8bf4e70629f12adfe8438dbbc707d215158a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84b8fb74fd-tjzsj" podUID="c3f054e1-5748-465f-ba13-4eeba844abd6" Jan 23 18:59:52.319809 kubelet[2800]: E0123 18:59:52.318475 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:52.322175 containerd[1564]: time="2026-01-23T18:59:52.319397565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:59:52.566157 kubelet[2800]: E0123 18:59:52.566112 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:52.567232 containerd[1564]: time="2026-01-23T18:59:52.566785527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vm7hl,Uid:c8facf2d-db59-43b4-b75d-c18e88cb697f,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:52.600815 kubelet[2800]: E0123 18:59:52.600028 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 18:59:52.603251 containerd[1564]: time="2026-01-23T18:59:52.603213732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sxppt,Uid:39fc028f-9ecf-48af-ba4d-3da48a6fc889,Namespace:kube-system,Attempt:0,}" Jan 23 18:59:52.646537 containerd[1564]: time="2026-01-23T18:59:52.646470447Z" level=error msg="Failed to destroy network for sandbox \"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.650654 containerd[1564]: time="2026-01-23T18:59:52.650416383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vm7hl,Uid:c8facf2d-db59-43b4-b75d-c18e88cb697f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.651548 kubelet[2800]: E0123 18:59:52.651193 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.651548 kubelet[2800]: E0123 18:59:52.651258 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vm7hl" Jan 23 18:59:52.651548 kubelet[2800]: E0123 18:59:52.651287 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-vm7hl" Jan 23 18:59:52.652393 kubelet[2800]: E0123 18:59:52.651460 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vm7hl_kube-system(c8facf2d-db59-43b4-b75d-c18e88cb697f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vm7hl_kube-system(c8facf2d-db59-43b4-b75d-c18e88cb697f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"692dcc73cec3ff70d60c2baa9cbed232b360b437b12e8d09db672bed550a090c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vm7hl" podUID="c8facf2d-db59-43b4-b75d-c18e88cb697f" Jan 23 18:59:52.652075 systemd[1]: run-netns-cni\x2d73811b66\x2de977\x2ddfa7\x2d2086\x2d71d3467a4e55.mount: Deactivated successfully. Jan 23 18:59:52.703245 containerd[1564]: time="2026-01-23T18:59:52.703188333Z" level=error msg="Failed to destroy network for sandbox \"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.704745 containerd[1564]: time="2026-01-23T18:59:52.704713734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sxppt,Uid:39fc028f-9ecf-48af-ba4d-3da48a6fc889,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.706947 kubelet[2800]: E0123 18:59:52.706622 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:52.706947 kubelet[2800]: E0123 18:59:52.706685 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sxppt" Jan 23 18:59:52.706947 kubelet[2800]: E0123 18:59:52.706724 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sxppt" Jan 23 18:59:52.707114 kubelet[2800]: E0123 18:59:52.706809 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-sxppt_kube-system(39fc028f-9ecf-48af-ba4d-3da48a6fc889)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sxppt_kube-system(39fc028f-9ecf-48af-ba4d-3da48a6fc889)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7108c45d38fbb0da3ea88e4d97ec01c4c1cc45d784e2b6e5595b807e957ec06a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sxppt" podUID="39fc028f-9ecf-48af-ba4d-3da48a6fc889" Jan 23 18:59:52.708073 systemd[1]: run-netns-cni\x2d4836662e\x2ddf3b\x2d3cc8\x2d1cca\x2d10efaa594ffa.mount: Deactivated successfully. Jan 23 18:59:53.090887 systemd[1]: Created slice kubepods-besteffort-pod04ac36e7_bbd5_42c4_814d_f8a86ddd8bdd.slice - libcontainer container kubepods-besteffort-pod04ac36e7_bbd5_42c4_814d_f8a86ddd8bdd.slice. Jan 23 18:59:53.101227 containerd[1564]: time="2026-01-23T18:59:53.100991373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pxmr,Uid:04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd,Namespace:calico-system,Attempt:0,}" Jan 23 18:59:54.932155 containerd[1564]: time="2026-01-23T18:59:54.931609378Z" level=error msg="Failed to destroy network for sandbox \"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:54.943994 systemd[1]: run-netns-cni\x2d1424d78d\x2df153\x2df900\x2d70f9\x2de76edf7c73f4.mount: Deactivated successfully. Jan 23 18:59:54.965177 containerd[1564]: time="2026-01-23T18:59:54.965039454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pxmr,Uid:04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:54.966694 kubelet[2800]: E0123 18:59:54.966525 2800 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:59:54.967948 kubelet[2800]: E0123 18:59:54.966854 2800 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:54.967948 kubelet[2800]: E0123 18:59:54.966998 2800 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7pxmr" Jan 23 18:59:54.967948 kubelet[2800]: E0123 18:59:54.967513 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9964cf0b6ecce2f81103d194956007045d73600b0602c1e9ca9274df65f3a612\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:00:00.361485 kubelet[2800]: I0123 19:00:00.359314 2800 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:00:00.361485 kubelet[2800]: E0123 19:00:00.360486 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:00.426612 kubelet[2800]: E0123 19:00:00.426533 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:04.204627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340538185.mount: Deactivated successfully. Jan 23 19:00:04.252801 containerd[1564]: time="2026-01-23T19:00:04.252545857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:04.253731 containerd[1564]: time="2026-01-23T19:00:04.253652859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 19:00:04.254295 containerd[1564]: time="2026-01-23T19:00:04.254252629Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:04.256474 containerd[1564]: time="2026-01-23T19:00:04.256435551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:00:04.257615 containerd[1564]: time="2026-01-23T19:00:04.257575492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.938093787s" Jan 23 19:00:04.257691 containerd[1564]: time="2026-01-23T19:00:04.257624352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 19:00:04.284148 containerd[1564]: time="2026-01-23T19:00:04.284104738Z" level=info msg="CreateContainer within sandbox 
\"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 19:00:04.295307 containerd[1564]: time="2026-01-23T19:00:04.295272859Z" level=info msg="Container fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:00:04.301493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063725243.mount: Deactivated successfully. Jan 23 19:00:04.306692 containerd[1564]: time="2026-01-23T19:00:04.306655309Z" level=info msg="CreateContainer within sandbox \"1e3d6ad0576c273e92074693ea325e785b85a36e03db6e0f93007706adac5ae1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9\"" Jan 23 19:00:04.308373 containerd[1564]: time="2026-01-23T19:00:04.308345811Z" level=info msg="StartContainer for \"fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9\"" Jan 23 19:00:04.310288 containerd[1564]: time="2026-01-23T19:00:04.310256623Z" level=info msg="connecting to shim fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9" address="unix:///run/containerd/s/a9104838643c71530731197dfe91e6c14eba354dc37a4a458e9674ebdf110b89" protocol=ttrpc version=3 Jan 23 19:00:04.376149 systemd[1]: Started cri-containerd-fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9.scope - libcontainer container fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9. Jan 23 19:00:04.487272 containerd[1564]: time="2026-01-23T19:00:04.487138583Z" level=info msg="StartContainer for \"fc7d5474ece2b7498b9f31f364a082c89871ca6a02f30665c8c1e8b333b16de9\" returns successfully" Jan 23 19:00:04.620044 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 19:00:04.620231 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 19:00:04.809032 kubelet[2800]: I0123 19:00:04.808571 2800 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7kmw\" (UniqueName: \"kubernetes.io/projected/c3f054e1-5748-465f-ba13-4eeba844abd6-kube-api-access-r7kmw\") pod \"c3f054e1-5748-465f-ba13-4eeba844abd6\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " Jan 23 19:00:04.809032 kubelet[2800]: I0123 19:00:04.808633 2800 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-backend-key-pair\") pod \"c3f054e1-5748-465f-ba13-4eeba844abd6\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " Jan 23 19:00:04.809032 kubelet[2800]: I0123 19:00:04.808663 2800 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-ca-bundle\") pod \"c3f054e1-5748-465f-ba13-4eeba844abd6\" (UID: \"c3f054e1-5748-465f-ba13-4eeba844abd6\") " Jan 23 19:00:04.812172 kubelet[2800]: I0123 19:00:04.811507 2800 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c3f054e1-5748-465f-ba13-4eeba844abd6" (UID: "c3f054e1-5748-465f-ba13-4eeba844abd6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:00:04.820763 kubelet[2800]: I0123 19:00:04.820720 2800 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f054e1-5748-465f-ba13-4eeba844abd6-kube-api-access-r7kmw" (OuterVolumeSpecName: "kube-api-access-r7kmw") pod "c3f054e1-5748-465f-ba13-4eeba844abd6" (UID: "c3f054e1-5748-465f-ba13-4eeba844abd6"). InnerVolumeSpecName "kube-api-access-r7kmw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:00:04.822353 kubelet[2800]: I0123 19:00:04.822284 2800 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c3f054e1-5748-465f-ba13-4eeba844abd6" (UID: "c3f054e1-5748-465f-ba13-4eeba844abd6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:00:04.909746 kubelet[2800]: I0123 19:00:04.909664 2800 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-backend-key-pair\") on node \"172-238-168-154\" DevicePath \"\"" Jan 23 19:00:04.909746 kubelet[2800]: I0123 19:00:04.909709 2800 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f054e1-5748-465f-ba13-4eeba844abd6-whisker-ca-bundle\") on node \"172-238-168-154\" DevicePath \"\"" Jan 23 19:00:04.909746 kubelet[2800]: I0123 19:00:04.909740 2800 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7kmw\" (UniqueName: \"kubernetes.io/projected/c3f054e1-5748-465f-ba13-4eeba844abd6-kube-api-access-r7kmw\") on node \"172-238-168-154\" DevicePath \"\"" Jan 23 19:00:05.083815 containerd[1564]: time="2026-01-23T19:00:05.083653594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c474f47cb-gsdmr,Uid:9b7d7d7b-9f3b-4806-a4e8-308622bc18c5,Namespace:calico-system,Attempt:0,}" Jan 23 19:00:05.112745 systemd[1]: Removed slice kubepods-besteffort-podc3f054e1_5748_465f_ba13_4eeba844abd6.slice - libcontainer container kubepods-besteffort-podc3f054e1_5748_465f_ba13_4eeba844abd6.slice. Jan 23 19:00:05.203366 systemd[1]: var-lib-kubelet-pods-c3f054e1\x2d5748\x2d465f\x2dba13\x2d4eeba844abd6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7kmw.mount: Deactivated successfully. Jan 23 19:00:05.204143 systemd[1]: var-lib-kubelet-pods-c3f054e1\x2d5748\x2d465f\x2dba13\x2d4eeba844abd6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 19:00:05.336303 systemd-networkd[1438]: calica2f860ac2e: Link UP Jan 23 19:00:05.338541 systemd-networkd[1438]: calica2f860ac2e: Gained carrier Jan 23 19:00:05.361238 containerd[1564]: 2026-01-23 19:00:05.158 [INFO][3822] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:05.361238 containerd[1564]: 2026-01-23 19:00:05.227 [INFO][3822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0 calico-kube-controllers-7c474f47cb- calico-system 9b7d7d7b-9f3b-4806-a4e8-308622bc18c5 846 0 2026-01-23 18:59:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c474f47cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-238-168-154 calico-kube-controllers-7c474f47cb-gsdmr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calica2f860ac2e [] [] }} ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-" Jan 23 19:00:05.361238 containerd[1564]: 2026-01-23 19:00:05.228 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.361238 containerd[1564]: 2026-01-23 19:00:05.284 [INFO][3834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" HandleID="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Workload="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.285 [INFO][3834] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" HandleID="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Workload="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-168-154", "pod":"calico-kube-controllers-7c474f47cb-gsdmr", "timestamp":"2026-01-23 19:00:05.284818813 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.285 [INFO][3834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.285 [INFO][3834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.285 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.293 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" host="172-238-168-154" Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.299 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.303 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.305 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:05.361882 containerd[1564]: 2026-01-23 19:00:05.307 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.307 [INFO][3834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" host="172-238-168-154" Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.308 [INFO][3834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73 Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.312 [INFO][3834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" host="172-238-168-154" Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.318 [INFO][3834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.193/26] block=192.168.102.192/26 handle="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" host="172-238-168-154" Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.318 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.193/26] handle="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" host="172-238-168-154" Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.318 [INFO][3834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:05.362225 containerd[1564]: 2026-01-23 19:00:05.318 [INFO][3834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.193/26] IPv6=[] ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" HandleID="k8s-pod-network.90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Workload="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.362444 containerd[1564]: 2026-01-23 19:00:05.322 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0", GenerateName:"calico-kube-controllers-7c474f47cb-", Namespace:"calico-system", SelfLink:"", UID:"9b7d7d7b-9f3b-4806-a4e8-308622bc18c5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c474f47cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"calico-kube-controllers-7c474f47cb-gsdmr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica2f860ac2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:05.362509 containerd[1564]: 2026-01-23 19:00:05.323 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.193/32] ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.362509 containerd[1564]: 2026-01-23 19:00:05.323 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica2f860ac2e ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.362509 containerd[1564]: 2026-01-23 19:00:05.339 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.362669 containerd[1564]: 
2026-01-23 19:00:05.340 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0", GenerateName:"calico-kube-controllers-7c474f47cb-", Namespace:"calico-system", SelfLink:"", UID:"9b7d7d7b-9f3b-4806-a4e8-308622bc18c5", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c474f47cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73", Pod:"calico-kube-controllers-7c474f47cb-gsdmr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calica2f860ac2e", MAC:"fa:e1:ad:0e:97:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:05.362730 containerd[1564]: 2026-01-23 19:00:05.357 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" Namespace="calico-system" Pod="calico-kube-controllers-7c474f47cb-gsdmr" WorkloadEndpoint="172--238--168--154-k8s-calico--kube--controllers--7c474f47cb--gsdmr-eth0" Jan 23 19:00:05.417740 containerd[1564]: time="2026-01-23T19:00:05.417121316Z" level=info msg="connecting to shim 90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73" address="unix:///run/containerd/s/18ed797137a6cf9ef8c9519371261c1a93d63d77854aeae8ef7dd0ab37f75684" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:05.450063 systemd[1]: Started cri-containerd-90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73.scope - libcontainer container 90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73. 
Jan 23 19:00:05.497812 kubelet[2800]: E0123 19:00:05.497768 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:05.552932 kubelet[2800]: I0123 19:00:05.551427 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ns4ld" podStartSLOduration=2.547648618 podStartE2EDuration="28.551392121s" podCreationTimestamp="2026-01-23 18:59:37 +0000 UTC" firstStartedPulling="2026-01-23 18:59:38.25498955 +0000 UTC m=+25.433884311" lastFinishedPulling="2026-01-23 19:00:04.258733053 +0000 UTC m=+51.437627814" observedRunningTime="2026-01-23 19:00:05.545500296 +0000 UTC m=+52.724395057" watchObservedRunningTime="2026-01-23 19:00:05.551392121 +0000 UTC m=+52.730286882" Jan 23 19:00:05.560509 containerd[1564]: time="2026-01-23T19:00:05.560456970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c474f47cb-gsdmr,Uid:9b7d7d7b-9f3b-4806-a4e8-308622bc18c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"90ff771b5025bbd2504198b3f4f41093d3e92690b125bcd7427e5dc8c3106b73\"" Jan 23 19:00:05.565743 containerd[1564]: time="2026-01-23T19:00:05.565486404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:00:05.651041 systemd[1]: Created slice kubepods-besteffort-pod27b7f510_5fb2_464f_a554_4a5af21f95ed.slice - libcontainer container kubepods-besteffort-pod27b7f510_5fb2_464f_a554_4a5af21f95ed.slice. Jan 23 19:00:05.720498 kubelet[2800]: I0123 19:00:05.720414 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27b7f510-5fb2-464f-a554-4a5af21f95ed-whisker-backend-key-pair\") pod \"whisker-76c4546688-v7z24\" (UID: \"27b7f510-5fb2-464f-a554-4a5af21f95ed\") " pod="calico-system/whisker-76c4546688-v7z24" Jan 23 19:00:05.720498 kubelet[2800]: I0123 19:00:05.720484 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gph2x\" (UniqueName: \"kubernetes.io/projected/27b7f510-5fb2-464f-a554-4a5af21f95ed-kube-api-access-gph2x\") pod \"whisker-76c4546688-v7z24\" (UID: \"27b7f510-5fb2-464f-a554-4a5af21f95ed\") " pod="calico-system/whisker-76c4546688-v7z24" Jan 23 19:00:05.720498 kubelet[2800]: I0123 19:00:05.720505 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27b7f510-5fb2-464f-a554-4a5af21f95ed-whisker-ca-bundle\") pod \"whisker-76c4546688-v7z24\" (UID: \"27b7f510-5fb2-464f-a554-4a5af21f95ed\") " pod="calico-system/whisker-76c4546688-v7z24" Jan 23 19:00:05.960675 containerd[1564]: time="2026-01-23T19:00:05.960390764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c4546688-v7z24,Uid:27b7f510-5fb2-464f-a554-4a5af21f95ed,Namespace:calico-system,Attempt:0,}" Jan 23 19:00:06.079691 containerd[1564]: time="2026-01-23T19:00:06.079647972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:06.080470 containerd[1564]: time="2026-01-23T19:00:06.079803822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z4wzh,Uid:6cc25f9b-f232-47b9-8c25-dc08c13b1bb7,Namespace:calico-system,Attempt:0,}" Jan 23 19:00:06.081640 kubelet[2800]: E0123 19:00:06.081131 2800 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:06.082194 containerd[1564]: time="2026-01-23T19:00:06.082150464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vm7hl,Uid:c8facf2d-db59-43b4-b75d-c18e88cb697f,Namespace:kube-system,Attempt:0,}" Jan 23 19:00:06.083566 containerd[1564]: time="2026-01-23T19:00:06.083359666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:00:06.083566 containerd[1564]: time="2026-01-23T19:00:06.083479206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:06.083635 containerd[1564]: time="2026-01-23T19:00:06.083585396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-czmzf,Uid:53afd191-0189-457d-b022-3c4e010c308d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:00:06.084475 kubelet[2800]: E0123 19:00:06.083882 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:06.084475 kubelet[2800]: E0123 19:00:06.084355 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:06.084850 kubelet[2800]: E0123 19:00:06.084649 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:06.085064 kubelet[2800]: E0123 19:00:06.085025 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:06.266611 systemd-networkd[1438]: cali8ccff62d7bf: Link UP Jan 23 19:00:06.274828 systemd-networkd[1438]: cali8ccff62d7bf: Gained carrier Jan 23 19:00:06.361362 containerd[1564]: 2026-01-23 19:00:06.014 [INFO][3953] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 23 19:00:06.361362 containerd[1564]: 2026-01-23 19:00:06.039 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0 whisker-76c4546688- calico-system 27b7f510-5fb2-464f-a554-4a5af21f95ed 943 0 2026-01-23 19:00:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76c4546688 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-168-154 whisker-76c4546688-v7z24 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8ccff62d7bf [] [] }} ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-" Jan 23 19:00:06.361362 containerd[1564]: 2026-01-23 19:00:06.039 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.361362 containerd[1564]: 2026-01-23 19:00:06.092 [INFO][3965] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" HandleID="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Workload="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.094 [INFO][3965] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" HandleID="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Workload="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-168-154", "pod":"whisker-76c4546688-v7z24", "timestamp":"2026-01-23 19:00:06.092171683 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.095 [INFO][3965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.095 [INFO][3965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.095 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.116 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" host="172-238-168-154" Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.164 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.187 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.192 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.362030 containerd[1564]: 2026-01-23 19:00:06.203 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.203 [INFO][3965] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" host="172-238-168-154" Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.214 [INFO][3965] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533 Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.236 [INFO][3965] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" host="172-238-168-154" Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.250 [INFO][3965] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.194/26] block=192.168.102.192/26 handle="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" host="172-238-168-154" Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.250 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.194/26] handle="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" host="172-238-168-154" Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.250 [INFO][3965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:06.362300 containerd[1564]: 2026-01-23 19:00:06.250 [INFO][3965] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.194/26] IPv6=[] ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" HandleID="k8s-pod-network.ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Workload="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.362468 containerd[1564]: 2026-01-23 19:00:06.258 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0", GenerateName:"whisker-76c4546688-", Namespace:"calico-system", SelfLink:"", UID:"27b7f510-5fb2-464f-a554-4a5af21f95ed", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76c4546688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"whisker-76c4546688-v7z24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ccff62d7bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.362468 containerd[1564]: 2026-01-23 19:00:06.258 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.194/32] ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.362547 containerd[1564]: 2026-01-23 19:00:06.259 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ccff62d7bf ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.362547 containerd[1564]: 2026-01-23 19:00:06.273 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.362614 containerd[1564]: 2026-01-23 19:00:06.277 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" 
WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0", GenerateName:"whisker-76c4546688-", Namespace:"calico-system", SelfLink:"", UID:"27b7f510-5fb2-464f-a554-4a5af21f95ed", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 0, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76c4546688", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533", Pod:"whisker-76c4546688-v7z24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ccff62d7bf", MAC:"66:81:04:cb:60:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.362673 containerd[1564]: 2026-01-23 19:00:06.343 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" Namespace="calico-system" Pod="whisker-76c4546688-v7z24" WorkloadEndpoint="172--238--168--154-k8s-whisker--76c4546688--v7z24-eth0" Jan 23 19:00:06.438600 containerd[1564]: time="2026-01-23T19:00:06.438504397Z" level=info msg="connecting to shim ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533" address="unix:///run/containerd/s/b7f63fb614aed1df38fea60079caf045b0107702b359e27e9d0c0cfe477f9c30" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:06.461033 systemd-networkd[1438]: cali7724dad6083: Link UP Jan 23 19:00:06.461304 systemd-networkd[1438]: cali7724dad6083: Gained carrier Jan 23 19:00:06.517648 kubelet[2800]: E0123 19:00:06.517511 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:06.524714 kubelet[2800]: E0123 19:00:06.523771 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:06.537252 systemd[1]: Started cri-containerd-ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533.scope - libcontainer container ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533. 
Jan 23 19:00:06.544087 containerd[1564]: 2026-01-23 19:00:06.173 [INFO][3984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:06.544087 containerd[1564]: 2026-01-23 19:00:06.211 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0 coredns-66bc5c9577- kube-system c8facf2d-db59-43b4-b75d-c18e88cb697f 849 0 2026-01-23 18:59:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-168-154 coredns-66bc5c9577-vm7hl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7724dad6083 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-" Jan 23 19:00:06.544087 containerd[1564]: 2026-01-23 19:00:06.211 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.544087 containerd[1564]: 2026-01-23 19:00:06.315 [INFO][4018] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" HandleID="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Workload="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.315 [INFO][4018] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" HandleID="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Workload="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd950), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-168-154", "pod":"coredns-66bc5c9577-vm7hl", "timestamp":"2026-01-23 19:00:06.315272216 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.315 [INFO][4018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.315 [INFO][4018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.315 [INFO][4018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.349 [INFO][4018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" host="172-238-168-154" Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.365 [INFO][4018] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.379 [INFO][4018] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.389 [INFO][4018] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.544316 containerd[1564]: 2026-01-23 19:00:06.403 [INFO][4018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.403 [INFO][4018] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" host="172-238-168-154" Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.405 [INFO][4018] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.413 [INFO][4018] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" host="172-238-168-154" Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.439 [INFO][4018] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.195/26] block=192.168.102.192/26 handle="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" host="172-238-168-154" Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.439 [INFO][4018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.195/26] handle="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" host="172-238-168-154" Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.439 [INFO][4018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:06.544627 containerd[1564]: 2026-01-23 19:00:06.440 [INFO][4018] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.195/26] IPv6=[] ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" HandleID="k8s-pod-network.644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Workload="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.444 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8facf2d-db59-43b4-b75d-c18e88cb697f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"coredns-66bc5c9577-vm7hl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7724dad6083", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.446 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.195/32] ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.448 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7724dad6083 ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 
19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.463 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.464 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c8facf2d-db59-43b4-b75d-c18e88cb697f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c", Pod:"coredns-66bc5c9577-vm7hl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7724dad6083", MAC:"56:8e:8a:df:a6:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.544811 containerd[1564]: 2026-01-23 19:00:06.506 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" Namespace="kube-system" Pod="coredns-66bc5c9577-vm7hl" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--vm7hl-eth0" Jan 23 19:00:06.653483 systemd-networkd[1438]: cali5c318e693f4: Link UP Jan 23 19:00:06.656325 systemd-networkd[1438]: cali5c318e693f4: Gained carrier Jan 23 19:00:06.675037 containerd[1564]: time="2026-01-23T19:00:06.674356881Z" level=info msg="connecting to shim 644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c" 
address="unix:///run/containerd/s/44aba8edbe88550b46d533fcb457f2e245864d8394f0de7c13aae0a6f0f66b2f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.167 [INFO][3973] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.209 [INFO][3973] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0 goldmane-7c778bb748- calico-system 6cc25f9b-f232-47b9-8c25-dc08c13b1bb7 851 0 2026-01-23 18:59:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-238-168-154 goldmane-7c778bb748-z4wzh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5c318e693f4 [] [] }} ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.209 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.336 [INFO][4014] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" HandleID="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Workload="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.336 [INFO][4014] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" HandleID="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Workload="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103a10), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-168-154", "pod":"goldmane-7c778bb748-z4wzh", "timestamp":"2026-01-23 19:00:06.336562516 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.337 [INFO][4014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.440 [INFO][4014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.440 [INFO][4014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.456 [INFO][4014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.482 [INFO][4014] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.541 [INFO][4014] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.561 [INFO][4014] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.567 [INFO][4014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.567 [INFO][4014] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.572 [INFO][4014] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.600 [INFO][4014] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.618 [INFO][4014] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.196/26] block=192.168.102.192/26 handle="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.619 [INFO][4014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.196/26] handle="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" host="172-238-168-154" Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.620 [INFO][4014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:06.703787 containerd[1564]: 2026-01-23 19:00:06.621 [INFO][4014] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.196/26] IPv6=[] ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" HandleID="k8s-pod-network.c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Workload="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.634 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"goldmane-7c778bb748-z4wzh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c318e693f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.634 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.196/32] ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.634 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c318e693f4 ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.656 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.662 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" 
WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6cc25f9b-f232-47b9-8c25-dc08c13b1bb7", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a", Pod:"goldmane-7c778bb748-z4wzh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c318e693f4", MAC:"3e:98:f9:38:54:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.704631 containerd[1564]: 2026-01-23 19:00:06.680 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" Namespace="calico-system" Pod="goldmane-7c778bb748-z4wzh" WorkloadEndpoint="172--238--168--154-k8s-goldmane--7c778bb748--z4wzh-eth0" Jan 23 19:00:06.720167 containerd[1564]: time="2026-01-23T19:00:06.720111663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76c4546688-v7z24,Uid:27b7f510-5fb2-464f-a554-4a5af21f95ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab5dbcf1d92cef093d721ad2f6f71c6cd4899ee96c3072c8b2b3bb2e6906d533\"" Jan 23 19:00:06.725062 containerd[1564]: time="2026-01-23T19:00:06.725028458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:00:06.766697 systemd-networkd[1438]: calie547244bff2: Link UP Jan 23 19:00:06.774575 systemd-networkd[1438]: calie547244bff2: Gained carrier Jan 23 19:00:06.837046 containerd[1564]: time="2026-01-23T19:00:06.836525858Z" level=info msg="connecting to shim c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a" address="unix:///run/containerd/s/ea163f05ce9615d91f36d3972ddb57dce96c2d48f578ee222d2bfd271587e9c9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.173 [INFO][3990] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.213 [INFO][3990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0 calico-apiserver-7cf455955c- calico-apiserver 53afd191-0189-457d-b022-3c4e010c308d 848 0 2026-01-23 18:59:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf455955c projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-168-154 calico-apiserver-7cf455955c-czmzf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie547244bff2 [] [] }} ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.213 [INFO][3990] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.395 [INFO][4016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" HandleID="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.398 [INFO][4016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" HandleID="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-238-168-154", "pod":"calico-apiserver-7cf455955c-czmzf", "timestamp":"2026-01-23 19:00:06.395417519 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.398 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.621 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.621 [INFO][4016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.640 [INFO][4016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.667 [INFO][4016] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.693 [INFO][4016] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.702 [INFO][4016] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.709 [INFO][4016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.710 [INFO][4016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.714 [INFO][4016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8 Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.724 [INFO][4016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.737 [INFO][4016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.197/26] block=192.168.102.192/26 handle="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.737 [INFO][4016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.197/26] handle="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" host="172-238-168-154" Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.737 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:06.838243 containerd[1564]: 2026-01-23 19:00:06.737 [INFO][4016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.197/26] IPv6=[] ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" HandleID="k8s-pod-network.924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.748 [INFO][3990] cni-plugin/k8s.go 418: Populated endpoint ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0", GenerateName:"calico-apiserver-7cf455955c-", Namespace:"calico-apiserver", SelfLink:"", UID:"53afd191-0189-457d-b022-3c4e010c308d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf455955c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"calico-apiserver-7cf455955c-czmzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie547244bff2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.748 [INFO][3990] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.197/32] ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.748 [INFO][3990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie547244bff2 ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.778 [INFO][3990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.778 [INFO][3990] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0", GenerateName:"calico-apiserver-7cf455955c-", Namespace:"calico-apiserver", SelfLink:"", UID:"53afd191-0189-457d-b022-3c4e010c308d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf455955c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8", Pod:"calico-apiserver-7cf455955c-czmzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie547244bff2", MAC:"1e:6e:b4:8f:0a:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:06.838886 containerd[1564]: 2026-01-23 19:00:06.819 [INFO][3990] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-czmzf" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--czmzf-eth0" Jan 23 19:00:06.873090 systemd[1]: Started cri-containerd-644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c.scope - libcontainer container 644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c. Jan 23 19:00:06.914307 containerd[1564]: time="2026-01-23T19:00:06.914251329Z" level=info msg="connecting to shim 924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8" address="unix:///run/containerd/s/de7a81ff9ec3bfe64cad1a0ee7fba9fc5c6b9d047439ecb0a0009ee50d5f9f20" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:06.945512 systemd[1]: Started cri-containerd-c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a.scope - libcontainer container c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a. Jan 23 19:00:07.011108 systemd[1]: Started cri-containerd-924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8.scope - libcontainer container 924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8. 
Jan 23 19:00:07.076251 containerd[1564]: time="2026-01-23T19:00:07.075586314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vm7hl,Uid:c8facf2d-db59-43b4-b75d-c18e88cb697f,Namespace:kube-system,Attempt:0,} returns sandbox id \"644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c\"" Jan 23 19:00:07.081028 kubelet[2800]: E0123 19:00:07.080864 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:07.084532 kubelet[2800]: E0123 19:00:07.084468 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:07.085578 containerd[1564]: time="2026-01-23T19:00:07.085540952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sxppt,Uid:39fc028f-9ecf-48af-ba4d-3da48a6fc889,Namespace:kube-system,Attempt:0,}" Jan 23 19:00:07.086691 kubelet[2800]: I0123 19:00:07.086652 2800 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f054e1-5748-465f-ba13-4eeba844abd6" path="/var/lib/kubelet/pods/c3f054e1-5748-465f-ba13-4eeba844abd6/volumes" Jan 23 19:00:07.088931 containerd[1564]: time="2026-01-23T19:00:07.088875455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-jr65c,Uid:26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:00:07.101467 systemd-networkd[1438]: calica2f860ac2e: Gained IPv6LL Jan 23 19:00:07.107792 containerd[1564]: time="2026-01-23T19:00:07.107672122Z" level=info msg="CreateContainer within sandbox \"644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:00:07.138663 containerd[1564]: time="2026-01-23T19:00:07.136383637Z" level=info msg="Container 6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:00:07.144949 containerd[1564]: time="2026-01-23T19:00:07.144879044Z" level=info msg="CreateContainer within sandbox \"644a96b2900178b2df14648bfa4dfa2f1c387848c5c7ccea003a84fad5adcb8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72\"" Jan 23 19:00:07.148082 containerd[1564]: time="2026-01-23T19:00:07.147798387Z" level=info msg="StartContainer for \"6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72\"" Jan 23 19:00:07.155015 containerd[1564]: time="2026-01-23T19:00:07.153968113Z" level=info msg="connecting to shim 6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72" address="unix:///run/containerd/s/44aba8edbe88550b46d533fcb457f2e245864d8394f0de7c13aae0a6f0f66b2f" protocol=ttrpc version=3 Jan 23 19:00:07.263182 systemd[1]: Started cri-containerd-6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72.scope - libcontainer container 6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72. 
Jan 23 19:00:07.352053 containerd[1564]: time="2026-01-23T19:00:07.351646937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-z4wzh,Uid:6cc25f9b-f232-47b9-8c25-dc08c13b1bb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c115a3ee176e02cd846473b2f4f2add0e326cae22aee3b9c11d7e9ee17f88e2a\"" Jan 23 19:00:07.444849 containerd[1564]: time="2026-01-23T19:00:07.444812128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-czmzf,Uid:53afd191-0189-457d-b022-3c4e010c308d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"924540eb16f83d21559271d63a1b15b9b3d988b09dc87bbb702533b6701253a8\"" Jan 23 19:00:07.486176 systemd-networkd[1438]: cali8ccff62d7bf: Gained IPv6LL Jan 23 19:00:07.510749 containerd[1564]: time="2026-01-23T19:00:07.510457116Z" level=info msg="StartContainer for \"6e92f95a1f2a7ced4bd990c64cc153db06d18f4179fa0887aa5e7b72fbcdcb72\" returns successfully" Jan 23 19:00:07.528390 kubelet[2800]: E0123 19:00:07.528348 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:07.539559 kubelet[2800]: E0123 19:00:07.539491 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:07.598590 systemd-networkd[1438]: cali9865d0558ac: Link UP Jan 23 19:00:07.598831 systemd-networkd[1438]: cali9865d0558ac: Gained carrier Jan 23 19:00:07.670817 kubelet[2800]: I0123 19:00:07.670750 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vm7hl" podStartSLOduration=50.670734108 podStartE2EDuration="50.670734108s" podCreationTimestamp="2026-01-23 18:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:00:07.592771019 +0000 UTC m=+54.771665780" watchObservedRunningTime="2026-01-23 19:00:07.670734108 +0000 UTC m=+54.849628869" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.355 [INFO][4332] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.380 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0 coredns-66bc5c9577- kube-system 39fc028f-9ecf-48af-ba4d-3da48a6fc889 841 0 2026-01-23 18:59:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-168-154 coredns-66bc5c9577-sxppt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9865d0558ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.380 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.469 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" HandleID="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Workload="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.469 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" HandleID="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Workload="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001dcfd0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-168-154", "pod":"coredns-66bc5c9577-sxppt", "timestamp":"2026-01-23 19:00:07.46911482 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.469 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.469 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.469 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.483 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.501 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.509 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.513 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.522 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.522 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.525 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8 Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.531 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.198/26] block=192.168.102.192/26 handle="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.198/26] handle="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" host="172-238-168-154" Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
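Everything between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock" above is Calico's block-affine allocation: confirm this host's affinity for 192.168.102.192/26, load the block, claim the next free address (192.168.102.198 here), and write the block back to the datastore. The claim step itself is plain CIDR arithmetic; a stdlib-only sketch, where the allocated set is illustrative rather than read from Calico's datastore:

```go
// Sketch of the "assign 1 address from block" step logged above.
package main

import (
	"fmt"
	"net/netip"
)

// firstFree returns the lowest address in block that is not yet allocated.
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	block := netip.MustParsePrefix("192.168.102.192/26")

	// Pretend .192 through .197 are taken, as they would be just before
	// the log's claim of .198; the real set lives in Calico's datastore.
	allocated := make(map[netip.Addr]bool)
	for a, n := block.Addr(), 0; n < 6; a, n = a.Next(), n+1 {
		allocated[a] = true
	}

	if a, ok := firstFree(block, allocated); ok {
		fmt.Println("claimed", a) // claimed 192.168.102.198
	}
}
```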
Jan 23 19:00:07.672788 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.198/26] IPv6=[] ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" HandleID="k8s-pod-network.083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Workload="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.592 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"39fc028f-9ecf-48af-ba4d-3da48a6fc889", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"coredns-66bc5c9577-sxppt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865d0558ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.592 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.198/32] ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.592 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9865d0558ac ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23
19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.597 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.605 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"39fc028f-9ecf-48af-ba4d-3da48a6fc889", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8", Pod:"coredns-66bc5c9577-sxppt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9865d0558ac", MAC:"16:4c:47:7f:dc:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.673416 containerd[1564]: 2026-01-23 19:00:07.669 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8" Namespace="kube-system" Pod="coredns-66bc5c9577-sxppt" WorkloadEndpoint="172--238--168--154-k8s-coredns--66bc5c9577--sxppt-eth0" Jan 23 19:00:07.741138 systemd-networkd[1438]: cali7724dad6083: Gained IPv6LL Jan 23 19:00:07.742408 containerd[1564]: time="2026-01-23T19:00:07.741505731Z" level=info msg="connecting to shim 083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8"
address="unix:///run/containerd/s/728d860f88facc2a406ebb3ebd7dbc58a28146d372bc6b6770df800f88711ef1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:07.786492 systemd-networkd[1438]: cali802bb088744: Link UP Jan 23 19:00:07.788954 systemd-networkd[1438]: cali802bb088744: Gained carrier Jan 23 19:00:07.807760 systemd-networkd[1438]: cali5c318e693f4: Gained IPv6LL Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.423 [INFO][4333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.458 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0 calico-apiserver-7cf455955c- calico-apiserver 26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab 852 0 2026-01-23 18:59:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cf455955c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-168-154 calico-apiserver-7cf455955c-jr65c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali802bb088744 [] [] }} ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.459 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.580 [INFO][4409] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" HandleID="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.580 [INFO][4409] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" HandleID="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003636f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-238-168-154", "pod":"calico-apiserver-7cf455955c-jr65c", "timestamp":"2026-01-23 19:00:07.580338048 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.580 [INFO][4409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.585 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.632 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.678 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.686 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.690 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.693 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.694 [INFO][4409] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.695 [INFO][4409] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7 Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.701 [INFO][4409] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.738 [INFO][4409] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.199/26] block=192.168.102.192/26 handle="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.743 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.199/26] handle="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" host="172-238-168-154" Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.743 [INFO][4409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:00:07.830613 containerd[1564]: 2026-01-23 19:00:07.743 [INFO][4409] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.199/26] IPv6=[] ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" HandleID="k8s-pod-network.ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Workload="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.769 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0", GenerateName:"calico-apiserver-7cf455955c-", Namespace:"calico-apiserver", SelfLink:"", UID:"26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf455955c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"calico-apiserver-7cf455955c-jr65c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802bb088744", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.770 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.199/32] ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.770 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali802bb088744 ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.791 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.792 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and
active container ID to endpoint ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0", GenerateName:"calico-apiserver-7cf455955c-", Namespace:"calico-apiserver", SelfLink:"", UID:"26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cf455955c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7", Pod:"calico-apiserver-7cf455955c-jr65c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali802bb088744", MAC:"72:42:33:3a:1b:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:07.833381 containerd[1564]: 2026-01-23 19:00:07.818 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" Namespace="calico-apiserver" Pod="calico-apiserver-7cf455955c-jr65c" WorkloadEndpoint="172--238--168--154-k8s-calico--apiserver--7cf455955c--jr65c-eth0" Jan 23 19:00:07.836093 systemd[1]: Started cri-containerd-083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8.scope - libcontainer container 083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8. Jan 23 19:00:07.892617 containerd[1564]: time="2026-01-23T19:00:07.892552063Z" level=info msg="connecting to shim ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" address="unix:///run/containerd/s/204f812b611cc7290d835b63706b040c91a97812762d375af6bf200a3a5887c0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:07.959519 systemd[1]: Started cri-containerd-ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7.scope - libcontainer container ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7.
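containerd's entries here are logfmt: space-separated key=value pairs, with values quoted and inner quotes backslash-escaped whenever they contain spaces. When grep stops being enough, a small parser recovers the fields; this sketch handles only the value shapes that appear in this log, not the full logfmt grammar:

```go
// Sketch: pull key=value fields out of a containerd-style log line.
package main

import (
	"fmt"
	"strings"
)

func parseLogfmt(line string) map[string]string {
	fields := make(map[string]string)
	i := 0
	for i < len(line) {
		for i < len(line) && line[i] == ' ' {
			i++ // skip separators
		}
		eq := strings.IndexByte(line[i:], '=')
		if eq < 0 {
			break // no more pairs
		}
		key := line[i : i+eq]
		i += eq + 1
		var b strings.Builder
		if i < len(line) && line[i] == '"' { // quoted value
			for i++; i < len(line) && line[i] != '"'; i++ {
				if line[i] == '\\' && i+1 < len(line) {
					i++ // drop the escape, keep the escaped byte
				}
				b.WriteByte(line[i])
			}
			i++ // step past the closing quote
		} else { // bare value
			for ; i < len(line) && line[i] != ' '; i++ {
				b.WriteByte(line[i])
			}
		}
		fields[key] = b.String()
	}
	return fields
}

func main() {
	// A real entry from just above.
	line := `time="2026-01-23T19:00:07.892552063Z" level=info msg="connecting to shim ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7" address="unix:///run/containerd/s/204f812b611cc7290d835b63706b040c91a97812762d375af6bf200a3a5887c0" namespace=k8s.io protocol=ttrpc version=3`
	f := parseLogfmt(line)
	fmt.Println(f["level"], f["protocol"], f["address"])
}
```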
Jan 23 19:00:08.030295 containerd[1564]: time="2026-01-23T19:00:08.030241604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sxppt,Uid:39fc028f-9ecf-48af-ba4d-3da48a6fc889,Namespace:kube-system,Attempt:0,} returns sandbox id \"083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8\"" Jan 23 19:00:08.033929 kubelet[2800]: E0123 19:00:08.033846 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:08.041346 containerd[1564]: time="2026-01-23T19:00:08.041312133Z" level=info msg="CreateContainer within sandbox \"083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:00:08.052368 containerd[1564]: time="2026-01-23T19:00:08.052330843Z" level=info msg="Container 133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:00:08.058355 containerd[1564]: time="2026-01-23T19:00:08.058302008Z" level=info msg="CreateContainer within sandbox \"083504576df9193f6790b5d32d34fe70fb08ad5feea46da369b1d3c767cfedb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6\"" Jan 23 19:00:08.061490 containerd[1564]: time="2026-01-23T19:00:08.061426660Z" level=info msg="StartContainer for \"133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6\"" Jan 23 19:00:08.066039 containerd[1564]: time="2026-01-23T19:00:08.065998835Z" level=info msg="connecting to shim 133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6" address="unix:///run/containerd/s/728d860f88facc2a406ebb3ebd7dbc58a28146d372bc6b6770df800f88711ef1" protocol=ttrpc version=3 Jan 23 19:00:08.137983 containerd[1564]: time="2026-01-23T19:00:08.137893336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cf455955c-jr65c,Uid:26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ed62ee7c1453168710bbad52f2ab029afd716ac2f555c74548cc80566480e3a7\"" Jan 23 19:00:08.163074 systemd[1]: Started cri-containerd-133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6.scope - libcontainer container 133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6. 
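kubelet's recurring "Nameserver limits exceeded" error reflects a platform limit rather than a fault in this cluster: the glibc resolver honors at most three nameserver lines (MAXNS), so kubelet trims the node's resolver list to three and logs the line it actually applied, here 172.232.0.22 172.232.0.9 172.232.0.19. A sketch of that clamp; reading /etc/resolv.conf directly is an illustration of the rule, not kubelet's exact code path:

```go
// Sketch: collect nameservers and clamp to the glibc limit of three.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
		fmt.Println("nameserver limits exceeded, applying:",
			strings.Join(servers, " "))
	}
	fmt.Println("nameservers:", servers)
}
```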
Jan 23 19:00:08.248669 containerd[1564]: time="2026-01-23T19:00:08.248462901Z" level=info msg="StartContainer for \"133e879a3d2b51029de38d2072b61d79b3521e08079a129750ff1300e50a25b6\" returns successfully" Jan 23 19:00:08.445202 systemd-networkd[1438]: calie547244bff2: Gained IPv6LL Jan 23 19:00:08.549446 kubelet[2800]: E0123 19:00:08.548350 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:08.551753 kubelet[2800]: E0123 19:00:08.550113 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:08.591169 kubelet[2800]: I0123 19:00:08.591027 2800 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sxppt" podStartSLOduration=51.591012024 podStartE2EDuration="51.591012024s" podCreationTimestamp="2026-01-23 18:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:00:08.572821619 +0000 UTC m=+55.751716400" watchObservedRunningTime="2026-01-23 19:00:08.591012024 +0000 UTC m=+55.769906785" Jan 23 19:00:08.778878 systemd-networkd[1438]: vxlan.calico: Link UP Jan 23 19:00:08.778889 systemd-networkd[1438]: vxlan.calico: Gained carrier Jan 23 19:00:08.831006 systemd-networkd[1438]: cali9865d0558ac: Gained IPv6LL Jan 23 19:00:08.957172 systemd-networkd[1438]: cali802bb088744: Gained IPv6LL Jan 23 19:00:09.082449 containerd[1564]: time="2026-01-23T19:00:09.082369783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pxmr,Uid:04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd,Namespace:calico-system,Attempt:0,}" Jan 23 19:00:09.261578 systemd-networkd[1438]: calif971ce804a8: Link UP Jan 23 19:00:09.262215 systemd-networkd[1438]: calif971ce804a8: Gained carrier Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.164 [INFO][4638] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--168--154-k8s-csi--node--driver--7pxmr-eth0 csi-node-driver- calico-system 04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd 729 0 2026-01-23 18:59:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-168-154 csi-node-driver-7pxmr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif971ce804a8 [] [] }} ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.164 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.201 [INFO][4651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" HandleID="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Workload="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.202 [INFO][4651] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" HandleID="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Workload="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-168-154", "pod":"csi-node-driver-7pxmr", "timestamp":"2026-01-23 19:00:09.201988743 +0000 UTC"}, Hostname:"172-238-168-154", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.202 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.202 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.202 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-168-154' Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.211 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.216 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.222 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.223 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.227 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.192/26 host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.227 [INFO][4651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.102.192/26 handle="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.230 [INFO][4651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.234 [INFO][4651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.102.192/26 handle="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.249 [INFO][4651] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.102.200/26] block=192.168.102.192/26 handle="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.249 [INFO][4651] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.102.200/26] handle="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" host="172-238-168-154" Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.249 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:00:09.285274 containerd[1564]: 2026-01-23 19:00:09.249 [INFO][4651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.102.200/26] IPv6=[] ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" HandleID="k8s-pod-network.0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Workload="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.252 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-csi--node--driver--7pxmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"", Pod:"csi-node-driver-7pxmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif971ce804a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.252 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.200/32] ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.252 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif971ce804a8 ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.264 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr"
WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.265 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--168--154-k8s-csi--node--driver--7pxmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 59, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-168-154", ContainerID:"0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b", Pod:"csi-node-driver-7pxmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif971ce804a8", MAC:"76:94:47:dc:8f:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:00:09.285875 containerd[1564]: 2026-01-23 19:00:09.282 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" Namespace="calico-system" Pod="csi-node-driver-7pxmr" WorkloadEndpoint="172--238--168--154-k8s-csi--node--driver--7pxmr-eth0" Jan 23 19:00:09.330582 containerd[1564]: time="2026-01-23T19:00:09.330479030Z" level=info msg="connecting to shim 0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b" address="unix:///run/containerd/s/694a4d054e3a27ef0d547198981bd4b31d091f080f14ec20e72dc3c7954c36da" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:00:09.367035 systemd[1]: Started cri-containerd-0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b.scope - libcontainer container 0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b. 
Jan 23 19:00:09.423444 containerd[1564]: time="2026-01-23T19:00:09.423406609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7pxmr,Uid:04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd,Namespace:calico-system,Attempt:0,} returns sandbox id \"0aaae52e8c7260b8effd5a0670bbb60ce802c7cb3d535de7e9628fb26699136b\"" Jan 23 19:00:09.552204 kubelet[2800]: E0123 19:00:09.552095 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:09.553160 kubelet[2800]: E0123 19:00:09.553135 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:09.981646 systemd-networkd[1438]: vxlan.calico: Gained IPv6LL Jan 23 19:00:10.555316 kubelet[2800]: E0123 19:00:10.554955 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:10.773198 containerd[1564]: time="2026-01-23T19:00:10.773115699Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:10.774589 containerd[1564]: time="2026-01-23T19:00:10.774486300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:00:10.774589 containerd[1564]: time="2026-01-23T19:00:10.774527200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:00:10.777652 kubelet[2800]: E0123 19:00:10.777581 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:10.777841 kubelet[2800]: E0123 19:00:10.777669 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:10.778033 kubelet[2800]: E0123 19:00:10.778002 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:10.779697 containerd[1564]: time="2026-01-23T19:00:10.779311634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:00:10.942806 containerd[1564]: time="2026-01-23T19:00:10.942754709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:10.943443 containerd[1564]: 
time="2026-01-23T19:00:10.943401494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:00:10.943496 containerd[1564]: time="2026-01-23T19:00:10.943474925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:10.943706 kubelet[2800]: E0123 19:00:10.943642 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:10.943706 kubelet[2800]: E0123 19:00:10.943693 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:10.944137 containerd[1564]: time="2026-01-23T19:00:10.944002400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:10.944393 kubelet[2800]: E0123 19:00:10.944324 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:10.944567 kubelet[2800]: E0123 19:00:10.944492 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:00:11.197822 systemd-networkd[1438]: calif971ce804a8: Gained IPv6LL Jan 23 19:00:11.454718 containerd[1564]: time="2026-01-23T19:00:11.454535258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:11.456248 containerd[1564]: time="2026-01-23T19:00:11.456089582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:11.456335 containerd[1564]: time="2026-01-23T19:00:11.456142306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:11.456682 kubelet[2800]: E0123 19:00:11.456635 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:11.456744 kubelet[2800]: E0123 19:00:11.456696 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:11.457040 kubelet[2800]: E0123 19:00:11.456991 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:11.457231 kubelet[2800]: E0123 19:00:11.457064 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:00:11.457835 containerd[1564]: time="2026-01-23T19:00:11.457586283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:11.558516 kubelet[2800]: E0123 19:00:11.558370 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:00:11.559687 kubelet[2800]: E0123 19:00:11.558724 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:00:12.510203 containerd[1564]: time="2026-01-23T19:00:12.509987241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:12.513033 containerd[1564]: time="2026-01-23T19:00:12.512424330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:12.513033 containerd[1564]: time="2026-01-23T19:00:12.512977506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:12.514827 kubelet[2800]: E0123 19:00:12.514007 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:12.514827 kubelet[2800]: E0123 19:00:12.514078 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:12.514827 kubelet[2800]: E0123 19:00:12.514434 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:12.514827 kubelet[2800]: E0123 19:00:12.514488 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:00:12.517345 containerd[1564]: time="2026-01-23T19:00:12.517314809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:00:12.563798 kubelet[2800]: E0123 19:00:12.563707 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:00:12.667657 containerd[1564]: time="2026-01-23T19:00:12.667583137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:12.668722 containerd[1564]: time="2026-01-23T19:00:12.668685419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:00:12.668892 containerd[1564]: 
time="2026-01-23T19:00:12.668798566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:00:12.669192 kubelet[2800]: E0123 19:00:12.669056 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:12.669192 kubelet[2800]: E0123 19:00:12.669121 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:12.669584 kubelet[2800]: E0123 19:00:12.669542 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:12.670643 containerd[1564]: time="2026-01-23T19:00:12.670255052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:00:13.188189 containerd[1564]: time="2026-01-23T19:00:13.188128149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:13.189128 containerd[1564]: time="2026-01-23T19:00:13.188998244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:00:13.189128 containerd[1564]: time="2026-01-23T19:00:13.189063958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:13.189678 kubelet[2800]: E0123 19:00:13.189402 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:13.189678 kubelet[2800]: E0123 19:00:13.189484 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:13.189678 kubelet[2800]: E0123 19:00:13.189664 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:13.190057 kubelet[2800]: E0123 19:00:13.189970 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:00:13.190211 containerd[1564]: time="2026-01-23T19:00:13.189887441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:00:13.339935 containerd[1564]: time="2026-01-23T19:00:13.339859491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:13.341104 containerd[1564]: time="2026-01-23T19:00:13.340958470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:00:13.341104 containerd[1564]: time="2026-01-23T19:00:13.340983212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:00:13.341430 kubelet[2800]: E0123 19:00:13.341371 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:13.341496 kubelet[2800]: E0123 19:00:13.341453 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:13.341911 kubelet[2800]: E0123 19:00:13.341574 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:13.342027 kubelet[2800]: 
E0123 19:00:13.341919 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:00:13.569301 kubelet[2800]: E0123 19:00:13.569129 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:00:13.574007 kubelet[2800]: E0123 19:00:13.573962 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:00:20.079541 containerd[1564]: time="2026-01-23T19:00:20.079381429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:00:21.578583 containerd[1564]: time="2026-01-23T19:00:21.578500356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:21.580079 containerd[1564]: time="2026-01-23T19:00:21.579859156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:00:21.580079 containerd[1564]: time="2026-01-23T19:00:21.579891188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:21.580420 kubelet[2800]: E0123 19:00:21.580307 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:21.580952 kubelet[2800]: E0123 19:00:21.580449 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:21.580952 kubelet[2800]: E0123 19:00:21.580636 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:21.580952 kubelet[2800]: E0123 19:00:21.580729 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:23.078342 kubelet[2800]: E0123 19:00:23.078290 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:24.080737 containerd[1564]: time="2026-01-23T19:00:24.080677300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:00:24.226154 containerd[1564]: time="2026-01-23T19:00:24.226074628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:24.227397 containerd[1564]: time="2026-01-23T19:00:24.227354228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:00:24.227551 containerd[1564]: time="2026-01-23T19:00:24.227382239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 
Jan 23 19:00:24.227708 kubelet[2800]: E0123 19:00:24.227657 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:24.228384 kubelet[2800]: E0123 19:00:24.227715 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:24.228384 kubelet[2800]: E0123 19:00:24.227965 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:24.228943 containerd[1564]: time="2026-01-23T19:00:24.228684001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:24.229013 kubelet[2800]: E0123 19:00:24.228805 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:00:24.363108 containerd[1564]: time="2026-01-23T19:00:24.362951324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:24.364282 containerd[1564]: time="2026-01-23T19:00:24.364247676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:24.364449 containerd[1564]: time="2026-01-23T19:00:24.364427404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:24.364685 kubelet[2800]: E0123 19:00:24.364612 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:24.364800 kubelet[2800]: E0123 19:00:24.364682 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23
19:00:24.364880 kubelet[2800]: E0123 19:00:24.364817 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:24.364932 kubelet[2800]: E0123 19:00:24.364859 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:00:25.077640 kubelet[2800]: E0123 19:00:25.077050 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:26.079847 containerd[1564]: time="2026-01-23T19:00:26.079785858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:26.767036 containerd[1564]: time="2026-01-23T19:00:26.766954304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:26.768353 containerd[1564]: time="2026-01-23T19:00:26.768294775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:26.768542 containerd[1564]: time="2026-01-23T19:00:26.768318556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:26.768650 kubelet[2800]: E0123 19:00:26.768602 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:26.769085 kubelet[2800]: E0123 19:00:26.768657 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:26.769085 kubelet[2800]: E0123 19:00:26.768839 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 23 19:00:26.769085 kubelet[2800]: E0123 19:00:26.768884 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:00:26.769745 containerd[1564]: time="2026-01-23T19:00:26.769705798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:00:26.932338 containerd[1564]: time="2026-01-23T19:00:26.932238961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:26.933320 containerd[1564]: time="2026-01-23T19:00:26.933265736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:00:26.933374 containerd[1564]: time="2026-01-23T19:00:26.933363221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:00:26.933543 kubelet[2800]: E0123 19:00:26.933506 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:26.933628 kubelet[2800]: E0123 19:00:26.933554 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:26.933720 kubelet[2800]: E0123 19:00:26.933681 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:26.935157 containerd[1564]: time="2026-01-23T19:00:26.935015095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:00:27.075645 containerd[1564]: time="2026-01-23T19:00:27.075473575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:27.092110 containerd[1564]: time="2026-01-23T19:00:27.091981525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:00:27.093043 containerd[1564]: 
time="2026-01-23T19:00:27.092034497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:27.093079 kubelet[2800]: E0123 19:00:27.092826 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:27.093079 kubelet[2800]: E0123 19:00:27.092925 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:27.093228 kubelet[2800]: E0123 19:00:27.093131 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:27.093296 kubelet[2800]: E0123 19:00:27.093259 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:00:27.095196 containerd[1564]: time="2026-01-23T19:00:27.095030337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:00:27.249981 containerd[1564]: time="2026-01-23T19:00:27.249878759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:27.260154 containerd[1564]: time="2026-01-23T19:00:27.260070973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:00:27.260647 containerd[1564]: time="2026-01-23T19:00:27.260167637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:00:27.260823 kubelet[2800]: E0123 19:00:27.260698 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:27.260963 kubelet[2800]: E0123 19:00:27.260852 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:27.261414 kubelet[2800]: E0123 19:00:27.261069 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:27.263074 containerd[1564]: time="2026-01-23T19:00:27.263024321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:00:27.502628 containerd[1564]: time="2026-01-23T19:00:27.502493787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:27.520170 containerd[1564]: time="2026-01-23T19:00:27.520086053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:00:27.520376 containerd[1564]: time="2026-01-23T19:00:27.520108074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:00:27.520885 kubelet[2800]: E0123 19:00:27.520785 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:27.521092 kubelet[2800]: E0123 19:00:27.520931 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:27.521307 kubelet[2800]: E0123 19:00:27.521231 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:27.521423 kubelet[2800]: E0123 19:00:27.521307 2800 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:00:33.077790 kubelet[2800]: E0123 19:00:33.077257 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:36.696513 kubelet[2800]: E0123 19:00:36.696373 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:37.080492 kubelet[2800]: E0123 19:00:37.077963 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:00:37.081034 kubelet[2800]: E0123 19:00:37.080961 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:38.079519 kubelet[2800]: E0123 19:00:38.079466 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:00:39.086995 kubelet[2800]: E0123 19:00:39.086597 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:00:39.088291 kubelet[2800]: E0123 19:00:39.087745 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:00:39.088291 kubelet[2800]: E0123 19:00:39.088221 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:00:43.087930 kubelet[2800]: E0123 19:00:43.087270 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:00:51.095332 containerd[1564]: time="2026-01-23T19:00:51.093765243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:00:51.298465 containerd[1564]: time="2026-01-23T19:00:51.298373643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:51.302582 containerd[1564]: time="2026-01-23T19:00:51.302401592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:00:51.302808 containerd[1564]: time="2026-01-23T19:00:51.302678369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
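The retry rhythm is visible in the timestamps by now: the apiserver image was attempted around 19:00:12, again around 19:00:24, then not until 19:00:51-53, with pod_workers.go ImagePullBackOff entries filling the gaps (19:00:38 and 19:00:39 above). That is the kubelet's per-image pull backoff; the 10-second initial delay, doubling factor, and 5-minute cap used below are the upstream kubelet defaults, assumed here rather than read from this node's configuration. A sketch of the resulting schedule:

    package main

    import (
        "fmt"
        "time"
    )

    // Prints the image-pull retry delays implied by the gaps in this log.
    // Constants are the assumed upstream kubelet defaults: start at 10s,
    // double per consecutive failure, never exceed 5 minutes.
    func main() {
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once the cap is reached, the pods sit in ImagePullBackOff and only periodic "Back-off pulling image" status lines appear, which matches the tail of this log where minutes pass between fresh PullImage attempts.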
Jan 23 19:00:51.303418 kubelet[2800]: E0123 19:00:51.303294 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:51.305710 kubelet[2800]: E0123 19:00:51.305056 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:00:51.308132 kubelet[2800]: E0123 19:00:51.306227 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:51.308232 containerd[1564]: time="2026-01-23T19:00:51.307148908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:51.485257 containerd[1564]: time="2026-01-23T19:00:51.485165676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:51.486616 containerd[1564]: time="2026-01-23T19:00:51.486558060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:51.486698 containerd[1564]: time="2026-01-23T19:00:51.486661373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:51.487149 kubelet[2800]: E0123 19:00:51.487063 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:51.487240 kubelet[2800]: E0123 19:00:51.487186 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:51.487923 kubelet[2800]: E0123 19:00:51.487364 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23
19:00:51.487923 kubelet[2800]: E0123 19:00:51.487525 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:00:51.489613 containerd[1564]: time="2026-01-23T19:00:51.489558824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:00:51.697594 containerd[1564]: time="2026-01-23T19:00:51.697452304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:51.698877 containerd[1564]: time="2026-01-23T19:00:51.698791217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:00:51.699010 containerd[1564]: time="2026-01-23T19:00:51.698970872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:51.699408 kubelet[2800]: E0123 19:00:51.699331 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:51.699531 kubelet[2800]: E0123 19:00:51.699418 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:00:51.699670 kubelet[2800]: E0123 19:00:51.699539 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:51.699786 kubelet[2800]: E0123 19:00:51.699730 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:00:52.082153 containerd[1564]: time="2026-01-23T19:00:52.081784052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:00:52.261087 containerd[1564]: time="2026-01-23T19:00:52.261027467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:52.263507 containerd[1564]: time="2026-01-23T19:00:52.263355502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:00:52.263507 containerd[1564]: time="2026-01-23T19:00:52.263448045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:52.263670 kubelet[2800]: E0123 19:00:52.263617 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:52.263670 kubelet[2800]: E0123 19:00:52.263680 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:52.263837 kubelet[2800]: E0123 19:00:52.263772 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:52.263837 kubelet[2800]: E0123 19:00:52.263808 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:00:53.096049 containerd[1564]: time="2026-01-23T19:00:53.095985890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:00:53.246536 containerd[1564]: time="2026-01-23T19:00:53.246477719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:53.249462 containerd[1564]: time="2026-01-23T19:00:53.249358927Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:00:53.249527 containerd[1564]: time="2026-01-23T19:00:53.249416698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:53.249827 kubelet[2800]: E0123 19:00:53.249781 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:53.250322 kubelet[2800]: E0123 19:00:53.249847 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:00:53.250322 kubelet[2800]: E0123 19:00:53.249946 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:53.250322 kubelet[2800]: E0123 19:00:53.250004 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:00:54.080806 containerd[1564]: time="2026-01-23T19:00:54.080355506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:00:54.262639 containerd[1564]: time="2026-01-23T19:00:54.262570831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:54.267589 containerd[1564]: time="2026-01-23T19:00:54.267527416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:00:54.267691 containerd[1564]: time="2026-01-23T19:00:54.267635859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:54.267921 kubelet[2800]: E0123 19:00:54.267857 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:54.268291 kubelet[2800]: E0123 19:00:54.267937 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:54.268291 kubelet[2800]: E0123 19:00:54.268030 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:54.268291 kubelet[2800]: E0123 19:00:54.268062 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:00:56.079102 containerd[1564]: time="2026-01-23T19:00:56.079048499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:00:56.215212 containerd[1564]: time="2026-01-23T19:00:56.214957582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:56.216588 containerd[1564]: time="2026-01-23T19:00:56.216521956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:00:56.216840 containerd[1564]: time="2026-01-23T19:00:56.216619028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:00:56.217022 kubelet[2800]: E0123 19:00:56.216875 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:56.218029 kubelet[2800]: E0123 19:00:56.217014 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:00:56.218029 kubelet[2800]: E0123 19:00:56.217165 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:56.218860 containerd[1564]: time="2026-01-23T19:00:56.218834668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:00:56.349638 containerd[1564]: time="2026-01-23T19:00:56.348526852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:56.350947 containerd[1564]: time="2026-01-23T19:00:56.350840813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:00:56.351427 containerd[1564]: time="2026-01-23T19:00:56.350891755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:00:56.352114 kubelet[2800]: E0123 19:00:56.352066 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:56.352228 kubelet[2800]: E0123 19:00:56.352123 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:00:56.352228 kubelet[2800]: E0123 19:00:56.352208 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:56.352346 kubelet[2800]: E0123 19:00:56.352255 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:05.086718 kubelet[2800]: E0123 
19:01:05.086565 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:01:07.083247 kubelet[2800]: E0123 19:01:07.082469 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:01:07.085949 kubelet[2800]: E0123 19:01:07.082532 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:01:07.085949 kubelet[2800]: E0123 19:01:07.085577 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:01:09.084043 kubelet[2800]: E0123 19:01:09.083468 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" 
podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:01:10.080278 kubelet[2800]: E0123 19:01:10.080049 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:17.082419 kubelet[2800]: E0123 19:01:17.082209 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:01:20.083081 kubelet[2800]: E0123 19:01:20.082378 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:01:20.083081 kubelet[2800]: E0123 19:01:20.082841 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:01:21.082652 kubelet[2800]: E0123 19:01:21.082470 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:22.080650 kubelet[2800]: E0123 19:01:22.080502 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:01:23.090957 kubelet[2800]: E0123 19:01:23.089710 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:01:26.077489 kubelet[2800]: E0123 19:01:26.077413 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:29.079277 kubelet[2800]: E0123 19:01:29.079219 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:30.078272 kubelet[2800]: E0123 19:01:30.078188 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:01:32.079598 kubelet[2800]: E0123 19:01:32.079505 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:32.331235 systemd[1]: Started sshd@9-172.238.168.154:22-68.220.241.50:41244.service - OpenSSH per-connection server daemon (68.220.241.50:41244). Jan 23 19:01:32.536838 sshd[4870]: Accepted publickey for core from 68.220.241.50 port 41244 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:32.539653 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:32.554114 systemd-logind[1533]: New session 10 of user core. Jan 23 19:01:32.560753 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:01:32.811245 sshd[4873]: Connection closed by 68.220.241.50 port 41244 Jan 23 19:01:32.812219 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:32.820121 systemd[1]: sshd@9-172.238.168.154:22-68.220.241.50:41244.service: Deactivated successfully. Jan 23 19:01:32.822792 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:01:32.824856 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:01:32.827198 systemd-logind[1533]: Removed session 10. 
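Every pull failure in this stretch has the same shape: containerd asks ghcr.io for a flatcar/calico image at tag v3.30.4, the registry answers 404 Not Found, and kubelet records ErrImagePull before moving the pod into back-off. The same existence check can be reproduced outside the kubelet against the Docker Registry HTTP API v2 that GHCR implements; the Python sketch below is illustrative only and not part of this journal (the repository and tag are taken from the failing references above, everything else is an assumption):

    import json, urllib.request, urllib.error

    REPO = "flatcar/calico/csi"   # repository path from one failing reference above
    TAG = "v3.30.4"

    # GHCR speaks the Docker Registry HTTP API v2; even anonymous pulls
    # need a bearer token scoped to the repository.
    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{REPO}:pull"))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept the common manifest types so a 404 means "tag missing",
            # not "media type mismatch".
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            print("manifest exists, HTTP", resp.status)
    except urllib.error.HTTPError as err:
        # HTTP 404 here corresponds to the "not found" errors in the journal.
        print("registry answered HTTP", err.code)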
Jan 23 19:01:33.080778 containerd[1564]: time="2026-01-23T19:01:33.080354885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:01:33.296608 containerd[1564]: time="2026-01-23T19:01:33.296334558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:33.298278 containerd[1564]: time="2026-01-23T19:01:33.298124630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:01:33.298278 containerd[1564]: time="2026-01-23T19:01:33.298246742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:01:33.299435 kubelet[2800]: E0123 19:01:33.298967 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:01:33.301105 kubelet[2800]: E0123 19:01:33.300020 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:01:33.301105 kubelet[2800]: E0123 19:01:33.300146 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-czmzf_calico-apiserver(53afd191-0189-457d-b022-3c4e010c308d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:33.301105 kubelet[2800]: E0123 19:01:33.300188 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:01:35.082933 containerd[1564]: time="2026-01-23T19:01:35.082008840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:01:35.232424 containerd[1564]: time="2026-01-23T19:01:35.232201934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:35.234517 containerd[1564]: time="2026-01-23T19:01:35.234351080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:01:35.234517 containerd[1564]: 
time="2026-01-23T19:01:35.234472192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:01:35.234765 kubelet[2800]: E0123 19:01:35.234702 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:01:35.235184 kubelet[2800]: E0123 19:01:35.234770 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:01:35.235184 kubelet[2800]: E0123 19:01:35.234963 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:35.237252 containerd[1564]: time="2026-01-23T19:01:35.237179725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:01:35.396853 containerd[1564]: time="2026-01-23T19:01:35.396231105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:35.397652 containerd[1564]: time="2026-01-23T19:01:35.397416720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:01:35.397739 containerd[1564]: time="2026-01-23T19:01:35.397710663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:01:35.398188 kubelet[2800]: E0123 19:01:35.398138 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:01:35.398315 kubelet[2800]: E0123 19:01:35.398186 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:01:35.398418 kubelet[2800]: E0123 19:01:35.398356 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cf455955c-jr65c_calico-apiserver(26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:35.398458 kubelet[2800]: E0123 19:01:35.398407 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:01:35.400024 containerd[1564]: time="2026-01-23T19:01:35.399962880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:01:35.546916 containerd[1564]: time="2026-01-23T19:01:35.546829316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:35.547792 containerd[1564]: time="2026-01-23T19:01:35.547748326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:01:35.547891 containerd[1564]: time="2026-01-23T19:01:35.547842117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:01:35.548186 kubelet[2800]: E0123 19:01:35.548128 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:01:35.548186 kubelet[2800]: E0123 19:01:35.548191 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:01:35.548479 kubelet[2800]: E0123 19:01:35.548446 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c474f47cb-gsdmr_calico-system(9b7d7d7b-9f3b-4806-a4e8-308622bc18c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:35.548644 kubelet[2800]: E0123 19:01:35.548485 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:01:35.549093 containerd[1564]: time="2026-01-23T19:01:35.549057982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:01:35.698626 containerd[1564]: time="2026-01-23T19:01:35.698008322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:35.699110 containerd[1564]: time="2026-01-23T19:01:35.698985894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:01:35.699778 containerd[1564]: time="2026-01-23T19:01:35.699117216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:01:35.699885 kubelet[2800]: E0123 19:01:35.699298 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:01:35.699885 kubelet[2800]: E0123 19:01:35.699544 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:01:35.700210 kubelet[2800]: E0123 19:01:35.699964 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-76c4546688-v7z24_calico-system(27b7f510-5fb2-464f-a554-4a5af21f95ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:35.700210 kubelet[2800]: E0123 19:01:35.700146 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:01:35.731027 update_engine[1536]: I20260123 19:01:35.729794 1536 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 19:01:35.731027 update_engine[1536]: I20260123 19:01:35.729986 1536 prefs.cc:52] 
certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 19:01:35.731027 update_engine[1536]: I20260123 19:01:35.730454 1536 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 19:01:35.731919 update_engine[1536]: I20260123 19:01:35.731873 1536 omaha_request_params.cc:62] Current group set to stable Jan 23 19:01:35.732202 update_engine[1536]: I20260123 19:01:35.732180 1536 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 23 19:01:35.732287 update_engine[1536]: I20260123 19:01:35.732262 1536 update_attempter.cc:643] Scheduling an action processor start. Jan 23 19:01:35.732374 update_engine[1536]: I20260123 19:01:35.732349 1536 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 19:01:35.732530 update_engine[1536]: I20260123 19:01:35.732512 1536 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 19:01:35.732671 update_engine[1536]: I20260123 19:01:35.732649 1536 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 19:01:35.732721 update_engine[1536]: I20260123 19:01:35.732705 1536 omaha_request_action.cc:272] Request: Jan 23 19:01:35.732721 update_engine[1536]: [Omaha request XML body not preserved in this capture] Jan 23 19:01:35.733725 update_engine[1536]: I20260123 19:01:35.732954 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:01:35.738992 update_engine[1536]: I20260123 19:01:35.737963 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:01:35.740731 update_engine[1536]: I20260123 19:01:35.740577 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:01:35.742724 locksmithd[1585]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 19:01:35.753297 update_engine[1536]: E20260123 19:01:35.753057 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:01:35.753297 update_engine[1536]: I20260123 19:01:35.753259 1536 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 19:01:37.850134 systemd[1]: Started sshd@10-172.238.168.154:22-68.220.241.50:50554.service - OpenSSH per-connection server daemon (68.220.241.50:50554). Jan 23 19:01:38.052607 sshd[4911]: Accepted publickey for core from 68.220.241.50 port 50554 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:38.054720 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:38.062166 systemd-logind[1533]: New session 11 of user core. Jan 23 19:01:38.069165 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:01:38.079186 kubelet[2800]: E0123 19:01:38.079137 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:38.296054 sshd[4914]: Connection closed by 68.220.241.50 port 50554 Jan 23 19:01:38.298127 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:38.304447 systemd[1]: sshd@10-172.238.168.154:22-68.220.241.50:50554.service: Deactivated successfully.
Jan 23 19:01:38.309486 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:01:38.314602 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:01:38.318444 systemd-logind[1533]: Removed session 11. Jan 23 19:01:39.078943 kubelet[2800]: E0123 19:01:39.078773 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:40.079999 kubelet[2800]: E0123 19:01:40.079698 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:41.087202 containerd[1564]: time="2026-01-23T19:01:41.086686557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:01:41.299311 containerd[1564]: time="2026-01-23T19:01:41.299122898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:41.301767 containerd[1564]: time="2026-01-23T19:01:41.301256802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:01:41.302220 containerd[1564]: time="2026-01-23T19:01:41.301975250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:01:41.303700 kubelet[2800]: E0123 19:01:41.302825 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:01:41.303700 kubelet[2800]: E0123 19:01:41.303034 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:01:41.305407 kubelet[2800]: E0123 19:01:41.304852 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-z4wzh_calico-system(6cc25f9b-f232-47b9-8c25-dc08c13b1bb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:41.306066 kubelet[2800]: E0123 19:01:41.305985 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:01:43.334258 systemd[1]: Started 
sshd@11-172.238.168.154:22-68.220.241.50:33334.service - OpenSSH per-connection server daemon (68.220.241.50:33334). Jan 23 19:01:43.514010 sshd[4947]: Accepted publickey for core from 68.220.241.50 port 33334 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:43.516075 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:43.527438 systemd-logind[1533]: New session 12 of user core. Jan 23 19:01:43.536088 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:01:43.743517 sshd[4950]: Connection closed by 68.220.241.50 port 33334 Jan 23 19:01:43.745265 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:43.753340 systemd[1]: sshd@11-172.238.168.154:22-68.220.241.50:33334.service: Deactivated successfully. Jan 23 19:01:43.758801 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:01:43.766790 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:01:43.770123 systemd-logind[1533]: Removed session 12. Jan 23 19:01:43.790121 systemd[1]: Started sshd@12-172.238.168.154:22-68.220.241.50:33346.service - OpenSSH per-connection server daemon (68.220.241.50:33346). Jan 23 19:01:43.997121 sshd[4962]: Accepted publickey for core from 68.220.241.50 port 33346 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:43.999264 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:44.006213 systemd-logind[1533]: New session 13 of user core. Jan 23 19:01:44.013625 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 19:01:44.079890 kubelet[2800]: E0123 19:01:44.079686 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:01:44.082797 containerd[1564]: time="2026-01-23T19:01:44.082322875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:01:44.234913 containerd[1564]: time="2026-01-23T19:01:44.234860073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:44.237752 containerd[1564]: time="2026-01-23T19:01:44.237562642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:01:44.238940 containerd[1564]: time="2026-01-23T19:01:44.237926836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:01:44.239093 kubelet[2800]: E0123 19:01:44.239062 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:01:44.239193 kubelet[2800]: E0123 19:01:44.239172 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:01:44.239409 kubelet[2800]: E0123 19:01:44.239316 2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:44.241397 containerd[1564]: time="2026-01-23T19:01:44.241142852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:01:44.288822 sshd[4965]: Connection closed by 68.220.241.50 port 33346 Jan 23 19:01:44.291307 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:44.300506 systemd[1]: sshd@12-172.238.168.154:22-68.220.241.50:33346.service: Deactivated successfully. Jan 23 19:01:44.305714 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:01:44.310036 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:01:44.327732 systemd[1]: Started sshd@13-172.238.168.154:22-68.220.241.50:33356.service - OpenSSH per-connection server daemon (68.220.241.50:33356). Jan 23 19:01:44.332028 systemd-logind[1533]: Removed session 13. Jan 23 19:01:44.430929 containerd[1564]: time="2026-01-23T19:01:44.430701925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:01:44.431971 containerd[1564]: time="2026-01-23T19:01:44.431943819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:01:44.432142 containerd[1564]: time="2026-01-23T19:01:44.432067950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:01:44.432563 kubelet[2800]: E0123 19:01:44.432488 2800 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:01:44.432738 kubelet[2800]: E0123 19:01:44.432690 2800 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:01:44.433364 kubelet[2800]: E0123 19:01:44.433110 
2800 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7pxmr_calico-system(04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:01:44.433364 kubelet[2800]: E0123 19:01:44.433213 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:44.524147 sshd[4975]: Accepted publickey for core from 68.220.241.50 port 33356 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:44.525613 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:44.532192 systemd-logind[1533]: New session 14 of user core. Jan 23 19:01:44.539214 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:01:44.799211 sshd[4978]: Connection closed by 68.220.241.50 port 33356 Jan 23 19:01:44.800208 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:44.806571 systemd[1]: sshd@13-172.238.168.154:22-68.220.241.50:33356.service: Deactivated successfully. Jan 23 19:01:44.810683 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:01:44.814077 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit. Jan 23 19:01:44.817565 systemd-logind[1533]: Removed session 14. Jan 23 19:01:45.728212 update_engine[1536]: I20260123 19:01:45.728018 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:01:45.728812 update_engine[1536]: I20260123 19:01:45.728426 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:01:45.729361 update_engine[1536]: I20260123 19:01:45.729324 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:01:45.730554 update_engine[1536]: E20260123 19:01:45.730453 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:01:45.730706 update_engine[1536]: I20260123 19:01:45.730615 1536 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 19:01:47.081057 kubelet[2800]: E0123 19:01:47.080969 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:01:48.077984 kubelet[2800]: E0123 19:01:48.077487 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:48.079864 kubelet[2800]: E0123 19:01:48.079731 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:01:48.082556 kubelet[2800]: E0123 19:01:48.082395 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:01:49.834359 systemd[1]: Started sshd@14-172.238.168.154:22-68.220.241.50:33362.service - OpenSSH per-connection server daemon (68.220.241.50:33362). Jan 23 19:01:50.001044 sshd[4995]: Accepted publickey for core from 68.220.241.50 port 33362 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:50.003325 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:50.009983 systemd-logind[1533]: New session 15 of user core. Jan 23 19:01:50.016577 systemd[1]: Started session-15.scope - Session 15 of User core. 
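The recurring dns.go:154 warnings mean the node's resolv.conf lists more nameservers than kubelet will pass through to pods: like the glibc resolver (MAXNS), kubelet caps the list at three, drops the rest, and logs the addresses it kept (here 172.232.0.22, 172.232.0.9, 172.232.0.19). A minimal sketch of that truncation, assuming a standard resolv.conf layout (illustrative, not kubelet's actual code):

    from pathlib import Path

    MAX_NAMESERVERS = 3  # the cap kubelet enforces, matching glibc's MAXNS

    def applied_nameservers(resolv_conf: str = "/etc/resolv.conf") -> list:
        # Collect the address from every "nameserver <addr>" line.
        servers = []
        for line in Path(resolv_conf).read_text().splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            # Mirrors the kubelet warning: extra entries are omitted.
            print("Nameserver limits exceeded, applied line:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers[:MAX_NAMESERVERS]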
Jan 23 19:01:50.233689 sshd[4998]: Connection closed by 68.220.241.50 port 33362 Jan 23 19:01:50.235171 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:50.243574 systemd[1]: sshd@14-172.238.168.154:22-68.220.241.50:33362.service: Deactivated successfully. Jan 23 19:01:50.249078 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:01:50.252349 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:01:50.254414 systemd-logind[1533]: Removed session 15. Jan 23 19:01:55.270728 systemd[1]: Started sshd@15-172.238.168.154:22-68.220.241.50:50154.service - OpenSSH per-connection server daemon (68.220.241.50:50154). Jan 23 19:01:55.456612 sshd[5010]: Accepted publickey for core from 68.220.241.50 port 50154 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:01:55.458179 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:01:55.467561 systemd-logind[1533]: New session 16 of user core. Jan 23 19:01:55.473073 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 19:01:55.680036 sshd[5013]: Connection closed by 68.220.241.50 port 50154 Jan 23 19:01:55.681188 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Jan 23 19:01:55.688253 systemd[1]: sshd@15-172.238.168.154:22-68.220.241.50:50154.service: Deactivated successfully. Jan 23 19:01:55.693662 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:01:55.696520 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:01:55.701360 systemd-logind[1533]: Removed session 16. Jan 23 19:01:55.729461 update_engine[1536]: I20260123 19:01:55.726945 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:01:55.729461 update_engine[1536]: I20260123 19:01:55.727052 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:01:55.729461 update_engine[1536]: I20260123 19:01:55.727547 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 19:01:55.730515 update_engine[1536]: E20260123 19:01:55.730485 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:01:55.730657 update_engine[1536]: I20260123 19:01:55.730633 1536 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 19:01:56.082459 kubelet[2800]: E0123 19:01:56.081310 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7" Jan 23 19:01:56.084925 kubelet[2800]: E0123 19:01:56.084127 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d" Jan 23 19:01:57.078011 kubelet[2800]: E0123 19:01:57.077647 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:01:57.081638 kubelet[2800]: E0123 19:01:57.081573 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd" Jan 23 19:01:59.082611 kubelet[2800]: E0123 19:01:59.082493 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed" Jan 23 19:02:00.078070 kubelet[2800]: E0123 19:02:00.078004 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5" Jan 23 19:02:00.720774 systemd[1]: Started sshd@16-172.238.168.154:22-68.220.241.50:50166.service - OpenSSH per-connection server daemon (68.220.241.50:50166). Jan 23 19:02:00.919161 sshd[5025]: Accepted publickey for core from 68.220.241.50 port 50166 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:02:00.920869 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:00.928284 systemd-logind[1533]: New session 17 of user core. Jan 23 19:02:00.934259 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:02:01.081886 kubelet[2800]: E0123 19:02:01.081290 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab" Jan 23 19:02:01.179190 sshd[5028]: Connection closed by 68.220.241.50 port 50166 Jan 23 19:02:01.180164 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Jan 23 19:02:01.185946 systemd[1]: sshd@16-172.238.168.154:22-68.220.241.50:50166.service: Deactivated successfully. Jan 23 19:02:01.189380 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:02:01.191338 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:02:01.192503 systemd-logind[1533]: Removed session 17. Jan 23 19:02:01.218143 systemd[1]: Started sshd@17-172.238.168.154:22-68.220.241.50:50182.service - OpenSSH per-connection server daemon (68.220.241.50:50182). Jan 23 19:02:01.397175 sshd[5040]: Accepted publickey for core from 68.220.241.50 port 50182 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:02:01.402616 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:01.411954 systemd-logind[1533]: New session 18 of user core. Jan 23 19:02:01.415489 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 19:02:01.807606 sshd[5043]: Connection closed by 68.220.241.50 port 50182 Jan 23 19:02:01.809226 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Jan 23 19:02:01.818387 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:02:01.819810 systemd[1]: sshd@17-172.238.168.154:22-68.220.241.50:50182.service: Deactivated successfully. Jan 23 19:02:01.824269 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:02:01.829233 systemd-logind[1533]: Removed session 18. Jan 23 19:02:01.843155 systemd[1]: Started sshd@18-172.238.168.154:22-68.220.241.50:50192.service - OpenSSH per-connection server daemon (68.220.241.50:50192). Jan 23 19:02:02.027011 sshd[5053]: Accepted publickey for core from 68.220.241.50 port 50192 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:02:02.029197 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:02.037110 systemd-logind[1533]: New session 19 of user core. Jan 23 19:02:02.046092 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:02:02.889931 sshd[5056]: Connection closed by 68.220.241.50 port 50192 Jan 23 19:02:02.891207 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jan 23 19:02:02.899200 systemd[1]: sshd@18-172.238.168.154:22-68.220.241.50:50192.service: Deactivated successfully. Jan 23 19:02:02.907012 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:02:02.910652 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:02:02.931306 systemd[1]: Started sshd@19-172.238.168.154:22-68.220.241.50:41402.service - OpenSSH per-connection server daemon (68.220.241.50:41402). Jan 23 19:02:02.933592 systemd-logind[1533]: Removed session 19. Jan 23 19:02:03.107432 sshd[5076]: Accepted publickey for core from 68.220.241.50 port 41402 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:02:03.109506 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:03.118680 systemd-logind[1533]: New session 20 of user core. Jan 23 19:02:03.125068 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:02:03.452430 sshd[5079]: Connection closed by 68.220.241.50 port 41402 Jan 23 19:02:03.453208 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 23 19:02:03.462580 systemd[1]: sshd@19-172.238.168.154:22-68.220.241.50:41402.service: Deactivated successfully. Jan 23 19:02:03.467309 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:02:03.468558 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:02:03.470680 systemd-logind[1533]: Removed session 20. Jan 23 19:02:03.489568 systemd[1]: Started sshd@20-172.238.168.154:22-68.220.241.50:41404.service - OpenSSH per-connection server daemon (68.220.241.50:41404). Jan 23 19:02:03.673702 sshd[5089]: Accepted publickey for core from 68.220.241.50 port 41404 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 19:02:03.678840 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:02:03.689790 systemd-logind[1533]: New session 21 of user core. Jan 23 19:02:03.694072 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 19:02:03.920991 sshd[5092]: Connection closed by 68.220.241.50 port 41404 Jan 23 19:02:03.923535 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jan 23 19:02:03.933212 systemd[1]: sshd@20-172.238.168.154:22-68.220.241.50:41404.service: Deactivated successfully. Jan 23 19:02:03.938538 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:02:03.943780 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:02:03.946184 systemd-logind[1533]: Removed session 21. Jan 23 19:02:04.077425 kubelet[2800]: E0123 19:02:04.077385 2800 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.727946 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.728044 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.728463 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 19:02:05.730932 update_engine[1536]: E20260123 19:02:05.730513 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 19:02:05.730932 update_engine[1536]: I20260123 19:02:05.730561 1536 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 19:02:05.730932 update_engine[1536]: I20260123 19:02:05.730584 1536 omaha_request_action.cc:617] Omaha request response: Jan 23 19:02:05.730932 update_engine[1536]: E20260123 19:02:05.730710 1536 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 19:02:05.731226 update_engine[1536]: I20260123 19:02:05.731202 1536 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 19:02:05.731277 update_engine[1536]: I20260123 19:02:05.731262 1536 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 19:02:05.731333 update_engine[1536]: I20260123 19:02:05.731313 1536 update_attempter.cc:306] Processing Done. Jan 23 19:02:05.731441 update_engine[1536]: E20260123 19:02:05.731423 1536 update_attempter.cc:619] Update failed. Jan 23 19:02:05.731503 update_engine[1536]: I20260123 19:02:05.731487 1536 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 19:02:05.731554 update_engine[1536]: I20260123 19:02:05.731539 1536 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 19:02:05.731607 update_engine[1536]: I20260123 19:02:05.731591 1536 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.727946 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.728044 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 19:02:05.728513 update_engine[1536]: I20260123 19:02:05.728463 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 19:02:05.730932 update_engine[1536]: E20260123 19:02:05.730513 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 19:02:05.730932 update_engine[1536]: I20260123 19:02:05.730561 1536 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 23 19:02:05.730932 update_engine[1536]: I20260123 19:02:05.730584 1536 omaha_request_action.cc:617] Omaha request response:
Jan 23 19:02:05.730932 update_engine[1536]: E20260123 19:02:05.730710 1536 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 23 19:02:05.731226 update_engine[1536]: I20260123 19:02:05.731202 1536 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 23 19:02:05.731277 update_engine[1536]: I20260123 19:02:05.731262 1536 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 19:02:05.731333 update_engine[1536]: I20260123 19:02:05.731313 1536 update_attempter.cc:306] Processing Done.
Jan 23 19:02:05.731441 update_engine[1536]: E20260123 19:02:05.731423 1536 update_attempter.cc:619] Update failed.
Jan 23 19:02:05.731503 update_engine[1536]: I20260123 19:02:05.731487 1536 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 23 19:02:05.731554 update_engine[1536]: I20260123 19:02:05.731539 1536 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 23 19:02:05.731607 update_engine[1536]: I20260123 19:02:05.731591 1536 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 23 19:02:05.732934 update_engine[1536]: I20260123 19:02:05.731753 1536 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 19:02:05.732934 update_engine[1536]: I20260123 19:02:05.731819 1536 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 19:02:05.732934 update_engine[1536]: I20260123 19:02:05.731828 1536 omaha_request_action.cc:272] Request:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]:
Jan 23 19:02:05.732934 update_engine[1536]: I20260123 19:02:05.731837 1536 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 19:02:05.732934 update_engine[1536]: I20260123 19:02:05.731863 1536 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 19:02:05.734478 update_engine[1536]: I20260123 19:02:05.734426 1536 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 19:02:05.735207 update_engine[1536]: E20260123 19:02:05.735178 1536 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 19:02:05.735325 update_engine[1536]: I20260123 19:02:05.735285 1536 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 23 19:02:05.735968 update_engine[1536]: I20260123 19:02:05.735945 1536 omaha_request_action.cc:617] Omaha request response:
Jan 23 19:02:05.736036 update_engine[1536]: I20260123 19:02:05.736019 1536 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 19:02:05.736080 update_engine[1536]: I20260123 19:02:05.736066 1536 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 23 19:02:05.736267 update_engine[1536]: I20260123 19:02:05.736253 1536 update_attempter.cc:306] Processing Done.
Jan 23 19:02:05.736310 update_engine[1536]: I20260123 19:02:05.736296 1536 update_attempter.cc:310] Error event sent.
Jan 23 19:02:05.736382 update_engine[1536]: I20260123 19:02:05.736346 1536 update_check_scheduler.cc:74] Next update check in 41m35s
Jan 23 19:02:05.736691 locksmithd[1585]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 23 19:02:05.738144 locksmithd[1585]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
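This update_engine failure is expected rather than a fault: "Posting an Omaha request to disabled" shows the Omaha server URL is literally the string "disabled", which is Flatcar's documented way of switching off automatic updates (SERVER=disabled in /etc/flatcar/update.conf). curl then predictably fails with "Could not resolve host: disabled", the attempt is recorded as error code 37 (kActionCodeOmahaErrorInHTTPResponse), and the scheduler retries in 41m35s. The empty update_engine continuation lines above held the Omaha request XML, which did not survive this capture. A hedged sketch of reading that setting follows; the path and SERVER key come from Flatcar's documentation, the parsing is ours:

```python
# Sketch: decide whether update_engine's Omaha errors are expected.
# /etc/flatcar/update.conf and its SERVER= key are Flatcar's documented
# update configuration; this parser is illustrative.

def omaha_server(path="/etc/flatcar/update.conf"):
    server = None
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("SERVER="):
                    server = line.split("=", 1)[1]
    except FileNotFoundError:
        pass  # no override -- update_engine falls back to its default
    return server

if __name__ == "__main__":
    server = omaha_server()
    if server == "disabled":
        # Matches the journal: curl cannot resolve the host "disabled",
        # so every update check fails by design.
        print("updates disabled; Omaha errors in the journal are expected")
    else:
        print(f"Omaha server: {server or 'built-in default'}")
```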
Jan 23 19:02:08.961188 systemd[1]: Started sshd@21-172.238.168.154:22-68.220.241.50:41406.service - OpenSSH per-connection server daemon (68.220.241.50:41406).
Jan 23 19:02:09.084273 kubelet[2800]: E0123 19:02:09.084187 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7pxmr" podUID="04ac36e7-bbd5-42c4-814d-f8a86ddd8bdd"
Jan 23 19:02:09.163994 sshd[5133]: Accepted publickey for core from 68.220.241.50 port 41406 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk
Jan 23 19:02:09.168293 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:02:09.176792 systemd-logind[1533]: New session 22 of user core.
Jan 23 19:02:09.182040 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 19:02:09.418934 sshd[5136]: Connection closed by 68.220.241.50 port 41406
Jan 23 19:02:09.419830 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Jan 23 19:02:09.429744 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit.
Jan 23 19:02:09.430311 systemd[1]: sshd@21-172.238.168.154:22-68.220.241.50:41406.service: Deactivated successfully.
Jan 23 19:02:09.436399 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 19:02:09.441517 systemd-logind[1533]: Removed session 22.
Jan 23 19:02:11.080912 kubelet[2800]: E0123 19:02:11.079996 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-z4wzh" podUID="6cc25f9b-f232-47b9-8c25-dc08c13b1bb7"
Jan 23 19:02:11.084374 kubelet[2800]: E0123 19:02:11.083981 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-czmzf" podUID="53afd191-0189-457d-b022-3c4e010c308d"
Jan 23 19:02:12.078564 kubelet[2800]: E0123 19:02:12.078422 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76c4546688-v7z24" podUID="27b7f510-5fb2-464f-a554-4a5af21f95ed"
Jan 23 19:02:14.463826 systemd[1]: Started sshd@22-172.238.168.154:22-68.220.241.50:48722.service - OpenSSH per-connection server daemon (68.220.241.50:48722).
Jan 23 19:02:14.651587 sshd[5150]: Accepted publickey for core from 68.220.241.50 port 48722 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk
Jan 23 19:02:14.653673 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:02:14.663635 systemd-logind[1533]: New session 23 of user core.
Jan 23 19:02:14.672098 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 19:02:14.892845 sshd[5153]: Connection closed by 68.220.241.50 port 48722
Jan 23 19:02:14.894495 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Jan 23 19:02:14.900079 systemd[1]: sshd@22-172.238.168.154:22-68.220.241.50:48722.service: Deactivated successfully.
Jan 23 19:02:14.904577 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 19:02:14.906094 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit.
Jan 23 19:02:14.908111 systemd-logind[1533]: Removed session 23.
Jan 23 19:02:15.079158 kubelet[2800]: E0123 19:02:15.079098 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c474f47cb-gsdmr" podUID="9b7d7d7b-9f3b-4806-a4e8-308622bc18c5"
Jan 23 19:02:16.079920 kubelet[2800]: E0123 19:02:16.079464 2800 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cf455955c-jr65c" podUID="26eb3ddf-a640-4f9f-b12e-8ab0e31fdfab"
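The recurring kubelet errors above are one failure repeated across pods: every ghcr.io/flatcar/calico/*:v3.30.4 image fails at the resolve step with NotFound (not an auth or network error), so the tag simply does not exist in the registry and the containers will sit in ImagePullBackOff until it is published or the manifests reference an existing tag. One way to confirm a missing tag without a node is to query the registry's OCI distribution API; the sketch below uses ghcr.io's anonymous-token endpoint, which I believe works for public images but is an assumption:

```python
# Sketch: check whether a tag exists on ghcr.io via the OCI distribution
# API. The anonymous-token flow is an assumption about ghcr.io's public
# registry behavior; a 404 on the manifest matches the NotFound above.
import json
import urllib.error
import urllib.request

ACCEPT = ("application/vnd.oci.image.index.v1+json, "
          "application/vnd.docker.distribution.manifest.list.v2+json, "
          "application/vnd.docker.distribution.manifest.v2+json")

def tag_exists(repo, tag):
    # Anonymous pull token (assumed sufficient for public repositories).
    token_url = (f"https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{repo}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}", "Accept": ACCEPT},
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False  # same "not found" the kubelet reports
        raise

if __name__ == "__main__":
    for repo in ("flatcar/calico/csi", "flatcar/calico/goldmane"):
        print(repo, "v3.30.4 exists:", tag_exists(repo, "v3.30.4"))
```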