Nov 4 05:04:26.322179 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 05:04:26.322211 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:04:26.322227 kernel: BIOS-provided physical RAM map:
Nov 4 05:04:26.322237 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 05:04:26.322246 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 05:04:26.322256 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 05:04:26.322267 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 4 05:04:26.322277 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 4 05:04:26.322290 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 05:04:26.322300 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 05:04:26.322312 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 05:04:26.322322 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 05:04:26.322331 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 05:04:26.322341 kernel: NX (Execute Disable) protection: active
Nov 4 05:04:26.322353 kernel: APIC: Static calls initialized
Nov 4 05:04:26.322366 kernel: SMBIOS 2.8 present.
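The BIOS-e820 entries above are the firmware's map of physical memory; the usable ranges are what the kernel can actually manage, and they feed the "Memory: ... available" accounting later in the log. A minimal sketch of that bookkeeping (Python; the regex and the idea of running it over saved dmesg text are illustrative assumptions, not something this boot performs):

    import re

    # Matches entries such as:
    #   BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg: str) -> int:
        """Sum the sizes of all ranges the firmware marked usable."""
        total = 0
        for start, end, kind in E820.findall(dmesg):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # bounds are inclusive
        return total

    # The two usable ranges above sum to 2,633,874,432 bytes (~2.45 GiB),
    # in line with the 2571752K total the kernel reports further down.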
Nov 4 05:04:26.322380 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 4 05:04:26.322389 kernel: DMI: Memory slots populated: 1/1
Nov 4 05:04:26.322398 kernel: Hypervisor detected: KVM
Nov 4 05:04:26.322407 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 05:04:26.322417 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 05:04:26.322428 kernel: kvm-clock: using sched offset of 4334120335 cycles
Nov 4 05:04:26.322439 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 05:04:26.322450 kernel: tsc: Detected 2794.750 MHz processor
Nov 4 05:04:26.322465 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 05:04:26.322477 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 05:04:26.322487 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 4 05:04:26.322497 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 05:04:26.322508 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 05:04:26.322519 kernel: Using GB pages for direct mapping
Nov 4 05:04:26.322538 kernel: ACPI: Early table checksum verification disabled
Nov 4 05:04:26.322553 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 4 05:04:26.322564 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322574 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322585 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322595 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 4 05:04:26.322606 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322617 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322631 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322642 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 05:04:26.322658 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 4 05:04:26.322669 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 4 05:04:26.322680 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 4 05:04:26.322694 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 4 05:04:26.322705 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 4 05:04:26.322716 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 4 05:04:26.322728 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 4 05:04:26.322738 kernel: No NUMA configuration found
Nov 4 05:04:26.322749 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 4 05:04:26.322761 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 4 05:04:26.322775 kernel: Zone ranges:
Nov 4 05:04:26.322786 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 05:04:26.322797 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 4 05:04:26.322808 kernel: Normal empty
Nov 4 05:04:26.322819 kernel: Device empty
Nov 4 05:04:26.322830 kernel: Movable zone start for each node
Nov 4 05:04:26.322841 kernel: Early memory node ranges
Nov 4 05:04:26.322855 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 05:04:26.322866 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 4 05:04:26.322877 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 4 05:04:26.322888 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 05:04:26.322923 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 05:04:26.322935 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 05:04:26.322952 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 05:04:26.322965 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 05:04:26.322984 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 05:04:26.322998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 05:04:26.323015 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 05:04:26.323029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 05:04:26.323043 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 05:04:26.323054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 05:04:26.323065 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 05:04:26.323081 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 05:04:26.323092 kernel: TSC deadline timer available
Nov 4 05:04:26.323103 kernel: CPU topo: Max. logical packages: 1
Nov 4 05:04:26.323113 kernel: CPU topo: Max. logical dies: 1
Nov 4 05:04:26.323124 kernel: CPU topo: Max. dies per package: 1
Nov 4 05:04:26.323134 kernel: CPU topo: Max. threads per core: 1
Nov 4 05:04:26.323144 kernel: CPU topo: Num. cores per package: 4
Nov 4 05:04:26.323155 kernel: CPU topo: Num. threads per package: 4
Nov 4 05:04:26.323170 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 4 05:04:26.323181 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 05:04:26.323191 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 05:04:26.323202 kernel: kvm-guest: setup PV sched yield
Nov 4 05:04:26.323213 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 05:04:26.323224 kernel: Booting paravirtualized kernel on KVM
Nov 4 05:04:26.323235 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 05:04:26.323249 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 4 05:04:26.323260 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 4 05:04:26.323271 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 4 05:04:26.323282 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 4 05:04:26.323292 kernel: kvm-guest: PV spinlocks enabled
Nov 4 05:04:26.323303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 05:04:26.323315 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:04:26.323329 kernel: random: crng init done
Nov 4 05:04:26.323340 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 05:04:26.323350 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 05:04:26.323361 kernel: Fallback order for Node 0: 0
Nov 4 05:04:26.323372 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 4 05:04:26.323383 kernel: Policy zone: DMA32
Nov 4 05:04:26.323394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 05:04:26.323408 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 4 05:04:26.323420 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 05:04:26.323430 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 05:04:26.323442 kernel: Dynamic Preempt: voluntary
Nov 4 05:04:26.323453 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 05:04:26.323465 kernel: rcu: RCU event tracing is enabled.
Nov 4 05:04:26.323476 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 4 05:04:26.323490 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 05:04:26.323507 kernel: Rude variant of Tasks RCU enabled.
Nov 4 05:04:26.323518 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 05:04:26.323539 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 05:04:26.323585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 4 05:04:26.323595 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 05:04:26.323604 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 05:04:26.323612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 4 05:04:26.323625 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 4 05:04:26.323634 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 05:04:26.323649 kernel: Console: colour VGA+ 80x25
Nov 4 05:04:26.323660 kernel: printk: legacy console [ttyS0] enabled
Nov 4 05:04:26.323668 kernel: ACPI: Core revision 20240827
Nov 4 05:04:26.323677 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 05:04:26.323685 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 05:04:26.323693 kernel: x2apic enabled
Nov 4 05:04:26.323702 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 05:04:26.323728 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 05:04:26.323737 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 05:04:26.323745 kernel: kvm-guest: setup PV IPIs
Nov 4 05:04:26.323754 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 05:04:26.323765 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 05:04:26.323773 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 4 05:04:26.323781 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 05:04:26.323790 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 05:04:26.323798 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 05:04:26.323806 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 05:04:26.323815 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 05:04:26.323825 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 05:04:26.323834 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 05:04:26.323843 kernel: active return thunk: retbleed_return_thunk
Nov 4 05:04:26.323854 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 05:04:26.323866 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 05:04:26.323877 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 05:04:26.323888 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 05:04:26.323923 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 05:04:26.323935 kernel: active return thunk: srso_return_thunk
Nov 4 05:04:26.324333 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 05:04:26.324343 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 05:04:26.324351 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 05:04:26.324360 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 05:04:26.324368 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 05:04:26.324381 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 05:04:26.324390 kernel: Freeing SMP alternatives memory: 32K
Nov 4 05:04:26.324398 kernel: pid_max: default: 32768 minimum: 301
Nov 4 05:04:26.324407 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 05:04:26.324415 kernel: landlock: Up and running.
Nov 4 05:04:26.324423 kernel: SELinux: Initializing.
Nov 4 05:04:26.324436 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 05:04:26.324447 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 05:04:26.324456 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 05:04:26.324465 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 05:04:26.324473 kernel: ... version: 0
Nov 4 05:04:26.324482 kernel: ... bit width: 48
Nov 4 05:04:26.324490 kernel: ... generic registers: 6
Nov 4 05:04:26.324498 kernel: ... value mask: 0000ffffffffffff
Nov 4 05:04:26.324509 kernel: ... max period: 00007fffffffffff
Nov 4 05:04:26.324517 kernel: ... fixed-purpose events: 0
Nov 4 05:04:26.324525 kernel: ... event mask: 000000000000003f
Nov 4 05:04:26.324544 kernel: signal: max sigframe size: 1776
Nov 4 05:04:26.324552 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 05:04:26.324561 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 05:04:26.324570 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 05:04:26.324581 kernel: smp: Bringing up secondary CPUs ...
Nov 4 05:04:26.324590 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 05:04:26.324599 kernel: .... node #0, CPUs: #1 #2 #3
Nov 4 05:04:26.324607 kernel: smp: Brought up 1 node, 4 CPUs
Nov 4 05:04:26.324615 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 4 05:04:26.324624 kernel: Memory: 2447340K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 118472K reserved, 0K cma-reserved)
Nov 4 05:04:26.324633 kernel: devtmpfs: initialized
Nov 4 05:04:26.324643 kernel: x86/mm: Memory block size: 128MB
Nov 4 05:04:26.324652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 05:04:26.324660 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 4 05:04:26.324669 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 05:04:26.324677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 05:04:26.324686 kernel: audit: initializing netlink subsys (disabled)
Nov 4 05:04:26.324694 kernel: audit: type=2000 audit(1762232663.138:1): state=initialized audit_enabled=0 res=1
Nov 4 05:04:26.324705 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 05:04:26.324713 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 05:04:26.324721 kernel: cpuidle: using governor menu
Nov 4 05:04:26.324730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 05:04:26.324738 kernel: dca service started, version 1.12.1
Nov 4 05:04:26.324747 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 05:04:26.324755 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 05:04:26.324766 kernel: PCI: Using configuration type 1 for base access
Nov 4 05:04:26.324774 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
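Two figures in this stretch of the log can be cross-checked directly: the zone setup's "Total pages: 642938" follows from the two early memory node ranges, and the 22358.00 BogoMIPS total is four CPUs times the per-CPU value derived from the TSC, since calibration was skipped under KVM. A quick verification (Python; HZ = 1000 is an assumption about this kernel's CONFIG_HZ, which the log does not show):

    # Present pages: the two "node 0" ranges, counted in 4 KiB pages
    # (bounds are inclusive).
    low = (0x9efff - 0x1000 + 1) // 4096          # 158 pages
    high = (0x9cfdbfff - 0x100000 + 1) // 4096    # 642780 pages
    assert low + high == 642_938                  # "Total pages: 642938"

    # With a skipped calibration, BogoMIPS = lpj * HZ / 500000.
    lpj, HZ = 2_794_750, 1000                     # lpj from the calibration line
    assert lpj * HZ / 500_000 == 5589.5           # "5589.50 BogoMIPS (lpj=2794750)"
    assert 4 * 5589.5 == 22_358.0                 # "Total of 4 processors activated"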
Nov 4 05:04:26.324783 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 05:04:26.324791 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 05:04:26.324800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 05:04:26.324808 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 05:04:26.324816 kernel: ACPI: Added _OSI(Module Device)
Nov 4 05:04:26.324825 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 05:04:26.324835 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 05:04:26.324844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 05:04:26.324855 kernel: ACPI: Interpreter enabled
Nov 4 05:04:26.324863 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 05:04:26.324872 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 05:04:26.324880 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 05:04:26.324888 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 05:04:26.324923 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 05:04:26.324932 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 05:04:26.325209 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 05:04:26.325413 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 05:04:26.325607 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 05:04:26.325624 kernel: PCI host bridge to bus 0000:00
Nov 4 05:04:26.325809 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 05:04:26.326030 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 05:04:26.326198 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 05:04:26.326357 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 4 05:04:26.326516 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 05:04:26.326691 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 05:04:26.326851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 05:04:26.327070 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 05:04:26.327256 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 05:04:26.327436 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 05:04:26.327655 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 05:04:26.327857 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 05:04:26.328100 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 05:04:26.328328 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 4 05:04:26.328523 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 4 05:04:26.328722 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 05:04:26.328929 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 05:04:26.329151 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 4 05:04:26.329339 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 4 05:04:26.329520 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 05:04:26.329715 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 05:04:26.329965 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 05:04:26.330213 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 4 05:04:26.330397 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 4 05:04:26.330594 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 4 05:04:26.330784 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 05:04:26.330993 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 05:04:26.331171 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 05:04:26.331367 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 05:04:26.331577 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 4 05:04:26.331807 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 4 05:04:26.332078 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 05:04:26.332261 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 05:04:26.332278 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 05:04:26.332287 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 05:04:26.332296 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 05:04:26.332308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 05:04:26.332318 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 05:04:26.332326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 05:04:26.332334 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 05:04:26.332345 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 05:04:26.332354 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 05:04:26.332362 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 05:04:26.332371 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 05:04:26.332379 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 05:04:26.332388 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 05:04:26.332396 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 05:04:26.332407 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 05:04:26.332415 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 05:04:26.332424 kernel: iommu: Default domain type: Translated
Nov 4 05:04:26.332433 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 05:04:26.332441 kernel: PCI: Using ACPI for IRQ routing
Nov 4 05:04:26.332449 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 05:04:26.332458 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 05:04:26.332469 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 4 05:04:26.332655 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 05:04:26.332862 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 05:04:26.333090 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 05:04:26.333105 kernel: vgaarb: loaded
Nov 4 05:04:26.333116 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 05:04:26.333130 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 05:04:26.333152 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 05:04:26.333167 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 05:04:26.333181 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 05:04:26.333196 kernel: pnp: PnP ACPI init
Nov 4 05:04:26.333444 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 05:04:26.333463 kernel: pnp: PnP ACPI: found 6 devices
Nov 4 05:04:26.333474 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 05:04:26.333491 kernel: NET: Registered PF_INET protocol family
Nov 4 05:04:26.333503 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 05:04:26.333515 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 05:04:26.333537 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 05:04:26.333547 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 05:04:26.333555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 05:04:26.333567 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 05:04:26.333576 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 05:04:26.333585 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 05:04:26.333594 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 05:04:26.333603 kernel: NET: Registered PF_XDP protocol family
Nov 4 05:04:26.333783 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 05:04:26.333977 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 05:04:26.334160 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 05:04:26.334319 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 4 05:04:26.334483 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 05:04:26.334654 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 05:04:26.334666 kernel: PCI: CLS 0 bytes, default 64
Nov 4 05:04:26.334675 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Nov 4 05:04:26.334683 kernel: Initialise system trusted keyrings
Nov 4 05:04:26.334698 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 05:04:26.334710 kernel: Key type asymmetric registered
Nov 4 05:04:26.334722 kernel: Asymmetric key parser 'x509' registered
Nov 4 05:04:26.334731 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 05:04:26.334739 kernel: io scheduler mq-deadline registered
Nov 4 05:04:26.334748 kernel: io scheduler kyber registered
Nov 4 05:04:26.334756 kernel: io scheduler bfq registered
Nov 4 05:04:26.334767 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 05:04:26.334777 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 05:04:26.334785 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 05:04:26.334794 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 4 05:04:26.334802 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 05:04:26.334811 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 05:04:26.334819 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 05:04:26.334830 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 05:04:26.334839 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 05:04:26.334847 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 05:04:26.335088 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 4 05:04:26.335303 kernel: rtc_cmos 00:04: registered as rtc0
Nov 4 05:04:26.335494 kernel: rtc_cmos 00:04: setting system clock to 2025-11-04T05:04:24 UTC (1762232664)
Nov 4 05:04:26.335736 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 05:04:26.335756 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 05:04:26.335765 kernel: NET: Registered PF_INET6 protocol family
Nov 4 05:04:26.335774 kernel: Segment Routing with IPv6
Nov 4 05:04:26.335782 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 05:04:26.335791 kernel: NET: Registered PF_PACKET protocol family
Nov 4 05:04:26.335799 kernel: Key type dns_resolver registered
Nov 4 05:04:26.335808 kernel: IPI shorthand broadcast: enabled
Nov 4 05:04:26.335819 kernel: sched_clock: Marking stable (1762003556, 204550432)->(2023078452, -56524464)
Nov 4 05:04:26.335827 kernel: registered taskstats version 1
Nov 4 05:04:26.335836 kernel: Loading compiled-in X.509 certificates
Nov 4 05:04:26.335844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 05:04:26.335853 kernel: Demotion targets for Node 0: null
Nov 4 05:04:26.335861 kernel: Key type .fscrypt registered
Nov 4 05:04:26.335870 kernel: Key type fscrypt-provisioning registered
Nov 4 05:04:26.335880 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 05:04:26.335889 kernel: ima: Allocated hash algorithm: sha1
Nov 4 05:04:26.335915 kernel: ima: No architecture policies found
Nov 4 05:04:26.335923 kernel: clk: Disabling unused clocks
Nov 4 05:04:26.335936 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 05:04:26.335949 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 05:04:26.335966 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 05:04:26.335990 kernel: Run /init as init process
Nov 4 05:04:26.336010 kernel: with arguments:
Nov 4 05:04:26.336026 kernel: /init
Nov 4 05:04:26.336042 kernel: with environment:
Nov 4 05:04:26.336061 kernel: HOME=/
Nov 4 05:04:26.336077 kernel: TERM=linux
Nov 4 05:04:26.336093 kernel: SCSI subsystem initialized
Nov 4 05:04:26.336113 kernel: libata version 3.00 loaded.
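The rtc_cmos line above encodes the same instant twice, as a wall-clock timestamp and as a Unix epoch, and that epoch lands one second after the audit timestamp (1762232663.138) recorded earlier in the log. A one-line consistency check (Python):

    from datetime import datetime, timezone

    # "setting system clock to 2025-11-04T05:04:24 UTC (1762232664)"
    wall = datetime(2025, 11, 4, 5, 4, 24, tzinfo=timezone.utc)
    assert int(wall.timestamp()) == 1762232664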
Nov 4 05:04:26.336555 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 05:04:26.336629 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 05:04:26.337015 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 05:04:26.337262 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 05:04:26.337462 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 05:04:26.337689 kernel: scsi host0: ahci
Nov 4 05:04:26.337879 kernel: scsi host1: ahci
Nov 4 05:04:26.338118 kernel: scsi host2: ahci
Nov 4 05:04:26.338304 kernel: scsi host3: ahci
Nov 4 05:04:26.338488 kernel: scsi host4: ahci
Nov 4 05:04:26.338689 kernel: scsi host5: ahci
Nov 4 05:04:26.338703 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 4 05:04:26.338712 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 4 05:04:26.338721 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 4 05:04:26.338730 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 4 05:04:26.338739 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 4 05:04:26.338748 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 4 05:04:26.338762 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 05:04:26.338770 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 4 05:04:26.338779 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 05:04:26.338788 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 05:04:26.338798 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 05:04:26.338807 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 05:04:26.338816 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 05:04:26.338826 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 4 05:04:26.338835 kernel: ata3.00: applying bridge limits
Nov 4 05:04:26.338844 kernel: ata3.00: LPM support broken, forcing max_power
Nov 4 05:04:26.338853 kernel: ata3.00: configured for UDMA/100
Nov 4 05:04:26.339093 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 4 05:04:26.339286 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 4 05:04:26.339466 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 4 05:04:26.339479 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 05:04:26.339488 kernel: GPT:16515071 != 27000831
Nov 4 05:04:26.339497 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 05:04:26.339505 kernel: GPT:16515071 != 27000831
Nov 4 05:04:26.339514 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 05:04:26.339523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 4 05:04:26.339730 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 4 05:04:26.339742 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 4 05:04:26.339993 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 4 05:04:26.340008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
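The GPT warnings above indicate a size mismatch rather than corruption: the backup GPT header belongs on the disk's last LBA, but the primary header points at LBA 16515071, so the partition table was written for a smaller disk than the 27000832-sector device virtio_blk reports. That is consistent with a freshly provisioned image whose backing disk was later enlarged; the disk-uuid.service messages further down rewrite the table. The arithmetic behind the warning (Python):

    SECTOR = 512
    blocks = 27_000_832                # "[vda] 27000832 512-byte logical blocks"
    assert blocks - 1 == 27_000_831    # LBA where the backup header should sit
    stored_alt = 16_515_071            # "GPT:16515071 != 27000831"
    original_gib = (stored_alt + 1) * SECTOR / 2**30
    # ~7.87 GiB: the size of the disk this partition table was originally built for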
Nov 4 05:04:26.340018 kernel: device-mapper: uevent: version 1.0.3
Nov 4 05:04:26.340027 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 05:04:26.340036 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 05:04:26.340050 kernel: raid6: avx2x4 gen() 30347 MB/s
Nov 4 05:04:26.340059 kernel: raid6: avx2x2 gen() 21729 MB/s
Nov 4 05:04:26.340068 kernel: raid6: avx2x1 gen() 20000 MB/s
Nov 4 05:04:26.340076 kernel: raid6: using algorithm avx2x4 gen() 30347 MB/s
Nov 4 05:04:26.340088 kernel: raid6: .... xor() 4732 MB/s, rmw enabled
Nov 4 05:04:26.340096 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 05:04:26.340105 kernel: xor: automatically using best checksumming function avx
Nov 4 05:04:26.340114 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 05:04:26.340123 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 4 05:04:26.340134 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c
Nov 4 05:04:26.340143 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:04:26.340154 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 05:04:26.340163 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 05:04:26.340172 kernel: loop: module loaded
Nov 4 05:04:26.340180 kernel: loop0: detected capacity change from 0 to 100136
Nov 4 05:04:26.340189 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 05:04:26.340199 systemd[1]: Successfully made /usr/ read-only.
Nov 4 05:04:26.340212 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 05:04:26.340224 systemd[1]: Detected virtualization kvm.
Nov 4 05:04:26.340233 systemd[1]: Detected architecture x86-64.
Nov 4 05:04:26.340243 systemd[1]: Running in initrd.
Nov 4 05:04:26.340252 systemd[1]: No hostname configured, using default hostname.
Nov 4 05:04:26.340261 systemd[1]: Hostname set to <localhost>.
Nov 4 05:04:26.340271 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 05:04:26.340282 systemd[1]: Queued start job for default target initrd.target.
Nov 4 05:04:26.340292 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 05:04:26.340301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:04:26.340311 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:04:26.340321 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 4 05:04:26.340330 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 05:04:26.340343 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 4 05:04:26.340353 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 4 05:04:26.340363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:04:26.340372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:04:26.340382 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 05:04:26.340391 systemd[1]: Reached target paths.target - Path Units.
Nov 4 05:04:26.340402 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 05:04:26.340412 systemd[1]: Reached target swap.target - Swaps.
Nov 4 05:04:26.340421 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 05:04:26.340430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 05:04:26.340442 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 05:04:26.340451 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 4 05:04:26.340461 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 4 05:04:26.340472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:04:26.340482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:04:26.340491 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:04:26.340500 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 05:04:26.340510 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 4 05:04:26.340519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 4 05:04:26.340538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 05:04:26.340550 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 4 05:04:26.340560 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 4 05:04:26.340569 systemd[1]: Starting systemd-fsck-usr.service...
Nov 4 05:04:26.340579 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 05:04:26.340589 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 05:04:26.340598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:04:26.340610 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 4 05:04:26.340619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:04:26.340629 systemd[1]: Finished systemd-fsck-usr.service.
Nov 4 05:04:26.340639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 05:04:26.340683 systemd-journald[317]: Collecting audit messages is disabled.
Nov 4 05:04:26.340706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 4 05:04:26.340716 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:04:26.340729 systemd-journald[317]: Journal started
Nov 4 05:04:26.340750 systemd-journald[317]: Runtime Journal (/run/log/journal/ee7f27ea02e74368b695eeebdec60fa6) is 6M, max 48.2M, 42.2M free.
Nov 4 05:04:26.345147 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 05:04:26.348163 systemd-modules-load[320]: Inserted module 'br_netfilter'
Nov 4 05:04:26.351600 kernel: Bridge firewalling registered
Nov 4 05:04:26.348483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 05:04:26.354282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 05:04:26.355512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:04:26.367010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 05:04:26.376142 systemd-tmpfiles[334]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 4 05:04:26.383062 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:04:26.455326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:04:26.459598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 05:04:26.462178 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 4 05:04:26.477429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:04:26.480602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 05:04:26.504597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 05:04:26.514257 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 4 05:04:26.569590 systemd-resolved[348]: Positive Trust Anchors:
Nov 4 05:04:26.569605 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 05:04:26.569610 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 05:04:26.569642 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 05:04:26.594311 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 05:04:26.608665 systemd-resolved[348]: Defaulting to hostname 'linux'.
Nov 4 05:04:26.610658 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 05:04:26.614590 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 05:04:26.713937 kernel: Loading iSCSI transport class v2.0-870.
Nov 4 05:04:26.729954 kernel: iscsi: registered transport (tcp)
Nov 4 05:04:26.761357 kernel: iscsi: registered transport (qla4xxx)
Nov 4 05:04:26.761443 kernel: QLogic iSCSI HBA Driver
Nov 4 05:04:26.793572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 05:04:26.817632 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:04:26.825199 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 05:04:26.896356 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 4 05:04:26.899181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 4 05:04:26.902511 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 4 05:04:26.946242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 05:04:26.951392 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 05:04:26.994478 systemd-udevd[599]: Using default interface naming scheme 'v257'.
Nov 4 05:04:27.010578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 05:04:27.017797 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 4 05:04:27.054792 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 05:04:27.060094 dracut-pre-trigger[669]: rd.md=0: removing MD RAID activation
Nov 4 05:04:27.062152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 05:04:27.096764 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 05:04:27.099855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 05:04:27.126538 systemd-networkd[709]: lo: Link UP
Nov 4 05:04:27.126549 systemd-networkd[709]: lo: Gained carrier
Nov 4 05:04:27.127388 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 05:04:27.128877 systemd[1]: Reached target network.target - Network.
Nov 4 05:04:27.202570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:04:27.204556 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 4 05:04:27.266158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 4 05:04:27.305243 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 4 05:04:27.318668 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 4 05:04:27.330119 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 4 05:04:27.368073 kernel: cryptd: max_cpu_qlen set to 1000
Nov 4 05:04:27.376929 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 4 05:04:27.384244 disk-uuid[773]: Primary Header is updated.
Nov 4 05:04:27.384244 disk-uuid[773]: Secondary Entries is updated.
Nov 4 05:04:27.384244 disk-uuid[773]: Secondary Header is updated.
Nov 4 05:04:27.387558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 05:04:27.396330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 05:04:27.397606 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:04:27.403737 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:04:27.408761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:04:27.417944 kernel: AES CTR mode by8 optimization enabled
Nov 4 05:04:27.431919 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:04:27.431934 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 05:04:27.432530 systemd-networkd[709]: eth0: Link UP
Nov 4 05:04:27.442836 systemd-networkd[709]: eth0: Gained carrier
Nov 4 05:04:27.442854 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:04:27.461974 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 05:04:27.524790 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 4 05:04:27.546434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:04:27.552127 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 05:04:27.552824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:04:27.556821 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 05:04:27.564329 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 4 05:04:27.603773 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 05:04:27.662565 systemd-resolved[348]: Detected conflict on linux IN A 10.0.0.124
Nov 4 05:04:27.662589 systemd-resolved[348]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Nov 4 05:04:28.481605 disk-uuid[774]: Warning: The kernel is still using the old partition table.
Nov 4 05:04:28.481605 disk-uuid[774]: The new table will be used at the next reboot or after you
Nov 4 05:04:28.481605 disk-uuid[774]: run partprobe(8) or kpartx(8)
Nov 4 05:04:28.481605 disk-uuid[774]: The operation has completed successfully.
Nov 4 05:04:28.496013 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 4 05:04:28.496209 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 4 05:04:28.499318 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 4 05:04:28.536764 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866)
Nov 4 05:04:28.536845 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:04:28.537030 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:04:28.541652 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 05:04:28.541681 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 05:04:28.549917 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:04:28.551079 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 4 05:04:28.554617 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 4 05:04:28.685494 ignition[885]: Ignition 2.22.0
Nov 4 05:04:28.685507 ignition[885]: Stage: fetch-offline
Nov 4 05:04:28.685548 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:28.685559 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:28.685648 ignition[885]: parsed url from cmdline: ""
Nov 4 05:04:28.685652 ignition[885]: no config URL provided
Nov 4 05:04:28.685657 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 05:04:28.685669 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Nov 4 05:04:28.685712 ignition[885]: op(1): [started] loading QEMU firmware config module
Nov 4 05:04:28.685717 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 4 05:04:28.703935 ignition[885]: op(1): [finished] loading QEMU firmware config module
Nov 4 05:04:28.703970 ignition[885]: QEMU firmware config was not found. Ignoring...
Nov 4 05:04:28.775213 systemd-networkd[709]: eth0: Gained IPv6LL
Nov 4 05:04:28.787024 ignition[885]: parsing config with SHA512: 29206d2727cc3dcc86ef5d1e3a6190d0196560d67134e25066a4146e2c0ed4912a0070a48049fbfe9aed41ef32d6eb2b1513a84a1ceb48cfb9ad92b3fe6b6a84
Nov 4 05:04:28.793545 unknown[885]: fetched base config from "system"
Nov 4 05:04:28.793573 unknown[885]: fetched user config from "qemu"
Nov 4 05:04:28.794741 ignition[885]: fetch-offline: fetch-offline passed
Nov 4 05:04:28.794840 ignition[885]: Ignition finished successfully
Nov 4 05:04:28.802182 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 05:04:28.804407 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 4 05:04:28.805414 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 05:04:28.847231 ignition[895]: Ignition 2.22.0
Nov 4 05:04:28.847245 ignition[895]: Stage: kargs
Nov 4 05:04:28.847404 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:28.847417 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:28.848516 ignition[895]: kargs: kargs passed
Nov 4 05:04:28.848585 ignition[895]: Ignition finished successfully
Nov 4 05:04:28.854253 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 05:04:28.856538 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 05:04:28.890501 ignition[903]: Ignition 2.22.0
Nov 4 05:04:28.890513 ignition[903]: Stage: disks
Nov 4 05:04:28.890646 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:28.890655 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:28.892419 ignition[903]: disks: disks passed
Nov 4 05:04:28.895657 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 05:04:28.892469 ignition[903]: Ignition finished successfully
Nov 4 05:04:28.896834 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 05:04:28.899956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 05:04:28.903627 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 05:04:28.904530 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 05:04:28.910557 systemd[1]: Reached target basic.target - Basic System.
Nov 4 05:04:28.916834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
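The fetch-offline stage above walks a fixed search order for a user config: a URL from the kernel command line (empty here), then the system config file, then QEMU's firmware-config device, whose kernel module it loads on demand. A simplified restatement of that order (Python; illustrative only, since Ignition itself is a Go binary, and the fw_cfg sysfs path below is an assumption about how QEMU exposes opt/com.coreos/config):

    import os
    import subprocess

    FW_CFG = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

    def find_user_config(cmdline_url: str) -> str | None:
        if cmdline_url:                        # 'parsed url from cmdline: ""'
            return f"url:{cmdline_url}"
        path = "/usr/lib/ignition/user.ign"    # "reading system config file ..."
        if os.path.exists(path):
            return f"file:{path}"
        # 'op(1): executing: "modprobe" "qemu_fw_cfg"' -- the driver is only
        # loaded once the earlier sources come up empty.
        subprocess.run(["modprobe", "qemu_fw_cfg"], check=False)
        if os.path.exists(FW_CFG):
            return f"fw_cfg:{FW_CFG}"
        return None                            # "was not found. Ignoring..."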
Nov 4 05:04:28.960371 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 4 05:04:28.968498 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 05:04:28.973816 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 05:04:29.092958 kernel: EXT4-fs (vda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none.
Nov 4 05:04:29.093848 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 05:04:29.095258 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 05:04:29.100240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 05:04:29.102151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 05:04:29.103974 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 4 05:04:29.104011 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 05:04:29.104036 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 05:04:29.128978 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 05:04:29.131583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 05:04:29.141021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921)
Nov 4 05:04:29.141074 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:04:29.142923 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:04:29.148507 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 05:04:29.148551 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 05:04:29.149999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 05:04:29.192503 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 05:04:29.198578 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Nov 4 05:04:29.204437 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 05:04:29.209659 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 05:04:29.302403 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 05:04:29.305961 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 05:04:29.308637 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 05:04:29.331972 kernel: BTRFS info (device vda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:04:29.347061 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 05:04:29.364799 ignition[1035]: INFO : Ignition 2.22.0
Nov 4 05:04:29.364799 ignition[1035]: INFO : Stage: mount
Nov 4 05:04:29.377441 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:29.377441 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:29.377441 ignition[1035]: INFO : mount: mount passed
Nov 4 05:04:29.377441 ignition[1035]: INFO : Ignition finished successfully
Nov 4 05:04:29.385403 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 05:04:29.387806 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 05:04:29.525375 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 05:04:29.527335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 05:04:29.560431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1047)
Nov 4 05:04:29.560495 kernel: BTRFS info (device vda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea
Nov 4 05:04:29.560509 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 05:04:29.565930 kernel: BTRFS info (device vda6): turning on async discard
Nov 4 05:04:29.565975 kernel: BTRFS info (device vda6): enabling free space tree
Nov 4 05:04:29.567643 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 05:04:29.601310 ignition[1064]: INFO : Ignition 2.22.0
Nov 4 05:04:29.601310 ignition[1064]: INFO : Stage: files
Nov 4 05:04:29.603918 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:29.603918 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:29.603918 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 05:04:29.610008 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 05:04:29.610008 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 05:04:29.617800 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 05:04:29.620150 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 05:04:29.622572 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 05:04:29.622572 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 05:04:29.622572 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 05:04:29.620640 unknown[1064]: wrote ssh authorized keys file for user: core
Nov 4 05:04:29.673912 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 05:04:29.747803 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 05:04:29.747803 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 05:04:29.805930 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:04:29.834546 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 4 05:04:30.132133 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 05:04:30.537821 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 4 05:04:30.537821 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 4 05:04:30.545982 ignition[1064]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 4 05:04:30.583917 ignition[1064]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 05:04:30.590031 ignition[1064]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 05:04:30.593297 ignition[1064]: INFO : files: files passed
Nov 4 05:04:30.593297 ignition[1064]: INFO : Ignition finished successfully
Nov 4 05:04:30.599887 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 05:04:30.604731 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 05:04:30.626860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 05:04:30.629988 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 05:04:30.630118 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 05:04:30.644313 initrd-setup-root-after-ignition[1095]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 4 05:04:30.649669 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:04:30.649669 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:04:30.654875 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 05:04:30.659244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 05:04:30.660465 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 05:04:30.664983 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 05:04:30.735852 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 05:04:30.736050 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 05:04:30.740335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 05:04:30.741324 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 05:04:30.749256 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 05:04:30.750567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 05:04:30.790617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 05:04:30.793287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 05:04:30.819945 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 05:04:30.820151 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 05:04:30.821362 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:04:30.826373 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 05:04:30.826970 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 05:04:30.827096 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 05:04:30.836235 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 05:04:30.837402 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 05:04:30.841949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 05:04:30.844811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 05:04:30.851360 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 05:04:30.852493 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 05:04:30.855826 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 05:04:30.859426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 05:04:30.862686 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 05:04:30.866695 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 05:04:30.867546 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 05:04:30.873358 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 05:04:30.873516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 05:04:30.879494 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:04:30.882954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:04:30.883828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 05:04:30.883968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:04:30.889001 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 05:04:30.889140 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 05:04:30.895805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 05:04:30.896052 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 05:04:30.896916 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 05:04:30.902325 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 05:04:30.910243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:04:30.915419 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 05:04:30.918734 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 05:04:30.919650 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 05:04:30.919826 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 05:04:30.927055 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 05:04:30.927207 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 05:04:30.928566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 05:04:30.928741 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 05:04:30.932334 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 05:04:30.932458 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 05:04:30.941122 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 05:04:30.942683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 05:04:30.948789 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 05:04:30.949078 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 05:04:30.950010 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 05:04:30.950146 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:04:30.956969 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 05:04:30.957157 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 05:04:30.969885 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 05:04:30.970072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 05:04:30.998943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 05:04:31.001327 ignition[1123]: INFO : Ignition 2.22.0
Nov 4 05:04:31.001327 ignition[1123]: INFO : Stage: umount
Nov 4 05:04:31.001327 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 05:04:31.001327 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 4 05:04:31.001327 ignition[1123]: INFO : umount: umount passed
Nov 4 05:04:31.001327 ignition[1123]: INFO : Ignition finished successfully
Nov 4 05:04:31.010619 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 05:04:31.010756 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 05:04:31.022411 systemd[1]: Stopped target network.target - Network.
Nov 4 05:04:31.023604 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 05:04:31.023746 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 05:04:31.027686 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 05:04:31.027791 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 05:04:31.032875 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 05:04:31.033118 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 05:04:31.035816 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 05:04:31.035915 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 05:04:31.041212 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 05:04:31.042734 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 05:04:31.064787 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 05:04:31.064974 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 05:04:31.074072 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 05:04:31.074227 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 05:04:31.082622 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 05:04:31.082849 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 05:04:31.089713 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 05:04:31.094305 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 05:04:31.094380 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:04:31.098810 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 05:04:31.098941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 05:04:31.101199 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 05:04:31.105989 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 05:04:31.106116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 05:04:31.106990 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 05:04:31.107055 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:04:31.114737 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 05:04:31.114838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:04:31.115765 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 05:04:31.146320 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 05:04:31.148612 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 05:04:31.155487 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 05:04:31.155642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:04:31.156821 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 05:04:31.156881 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:04:31.163397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 05:04:31.163534 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 05:04:31.165762 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 05:04:31.165861 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 05:04:31.172054 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 05:04:31.172163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 05:04:31.181414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 05:04:31.182431 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 05:04:31.182538 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:04:31.183458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 05:04:31.183528 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:04:31.190988 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 05:04:31.191094 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:04:31.196519 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 05:04:31.196620 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:04:31.197906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 05:04:31.197976 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:04:31.206081 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 05:04:31.206244 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 05:04:31.209062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 05:04:31.209204 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 05:04:31.213754 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 05:04:31.218289 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 05:04:31.243024 systemd[1]: Switching root.
Nov 4 05:04:31.275990 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 4 05:04:31.276061 systemd-journald[317]: Journal stopped
Nov 4 05:04:32.903552 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 05:04:32.903632 kernel: SELinux: policy capability open_perms=1
Nov 4 05:04:32.903654 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 05:04:32.903671 kernel: SELinux: policy capability always_check_network=0
Nov 4 05:04:32.903732 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 05:04:32.903750 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 05:04:32.903767 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 05:04:32.903783 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 05:04:32.903805 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 05:04:32.903823 kernel: audit: type=1403 audit(1762232671.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 05:04:32.903847 systemd[1]: Successfully loaded SELinux policy in 73.695ms.
Nov 4 05:04:32.903909 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.856ms.
Nov 4 05:04:32.903930 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 05:04:32.903948 systemd[1]: Detected virtualization kvm.
Nov 4 05:04:32.903965 systemd[1]: Detected architecture x86-64.
Nov 4 05:04:32.903982 systemd[1]: Detected first boot.
Nov 4 05:04:32.904008 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 05:04:32.904026 zram_generator::config[1169]: No configuration found.
Nov 4 05:04:32.904058 kernel: Guest personality initialized and is inactive
Nov 4 05:04:32.904075 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 05:04:32.904091 kernel: Initialized host personality
Nov 4 05:04:32.904108 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 05:04:32.904126 systemd[1]: Populated /etc with preset unit settings.
Nov 4 05:04:32.904147 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 05:04:32.904165 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 05:04:32.904195 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 05:04:32.904214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 05:04:32.904232 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 05:04:32.904250 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 05:04:32.904268 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 05:04:32.904285 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 05:04:32.904303 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 05:04:32.904330 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 05:04:32.904347 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 05:04:32.904365 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 05:04:32.904388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 05:04:32.904415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 05:04:32.904433 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 05:04:32.904451 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 05:04:32.904481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 05:04:32.904499 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 05:04:32.904517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 05:04:32.904534 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 05:04:32.904552 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 05:04:32.904573 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 05:04:32.904600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 05:04:32.904618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 05:04:32.904638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 05:04:32.904658 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 05:04:32.904677 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 05:04:32.904694 systemd[1]: Reached target swap.target - Swaps.
Nov 4 05:04:32.904711 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 05:04:32.904738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 05:04:32.904756 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 05:04:32.904774 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 05:04:32.904792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 05:04:32.904809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 05:04:32.904827 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 05:04:32.904845 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 05:04:32.904872 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 05:04:32.904890 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 05:04:32.904972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:32.904990 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 05:04:32.905008 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 05:04:32.905027 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 05:04:32.905081 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 05:04:32.905112 systemd[1]: Reached target machines.target - Containers.
Nov 4 05:04:32.905130 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 05:04:32.905147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 05:04:32.905165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 05:04:32.905183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 05:04:32.905201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 05:04:32.905218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 05:04:32.905246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 05:04:32.905263 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 05:04:32.905286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 05:04:32.905305 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 05:04:32.905322 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 05:04:32.905340 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 05:04:32.905357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 05:04:32.905385 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 05:04:32.905412 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 05:04:32.905431 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 05:04:32.905448 kernel: fuse: init (API version 7.41)
Nov 4 05:04:32.905466 kernel: ACPI: bus type drm_connector registered
Nov 4 05:04:32.905483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 05:04:32.905502 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 05:04:32.905533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 05:04:32.905551 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 05:04:32.905595 systemd-journald[1246]: Collecting audit messages is disabled.
Nov 4 05:04:32.905641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 05:04:32.905659 systemd-journald[1246]: Journal started
Nov 4 05:04:32.905689 systemd-journald[1246]: Runtime Journal (/run/log/journal/ee7f27ea02e74368b695eeebdec60fa6) is 6M, max 48.2M, 42.2M free.
Nov 4 05:04:32.905736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:32.557089 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 05:04:32.577279 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 4 05:04:32.577888 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 05:04:32.913931 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 05:04:32.917096 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 05:04:32.919350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 05:04:32.921754 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 05:04:32.923744 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 05:04:32.925969 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 05:04:32.928097 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 05:04:32.930280 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 05:04:32.932833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 05:04:32.935498 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 05:04:32.935758 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 05:04:32.938277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 05:04:32.938541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 05:04:32.940986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 05:04:32.941243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 05:04:32.943505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 05:04:32.943765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 05:04:32.946343 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 05:04:32.946607 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 05:04:32.948969 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 05:04:32.949223 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 05:04:32.951694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 05:04:32.954250 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 05:04:32.957769 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 05:04:32.960628 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 05:04:32.978316 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 05:04:32.981268 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 05:04:32.985224 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 05:04:32.990070 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 05:04:32.992438 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 05:04:32.992497 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 05:04:32.995968 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 05:04:32.998626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 05:04:33.003060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 05:04:33.007154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 05:04:33.009628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 05:04:33.010955 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 05:04:33.011877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 05:04:33.014788 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 05:04:33.019124 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 05:04:33.024156 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 05:04:33.030178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 05:04:33.032343 systemd-journald[1246]: Time spent on flushing to /var/log/journal/ee7f27ea02e74368b695eeebdec60fa6 is 39.334ms for 970 entries.
Nov 4 05:04:33.032343 systemd-journald[1246]: System Journal (/var/log/journal/ee7f27ea02e74368b695eeebdec60fa6) is 8M, max 163.5M, 155.5M free.
Nov 4 05:04:33.092137 systemd-journald[1246]: Received client request to flush runtime journal.
Nov 4 05:04:33.092191 kernel: loop1: detected capacity change from 0 to 119080
Nov 4 05:04:33.034656 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 05:04:33.037290 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 05:04:33.040630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 05:04:33.045489 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 05:04:33.051783 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 05:04:33.056072 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 05:04:33.061218 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Nov 4 05:04:33.061232 systemd-tmpfiles[1290]: ACLs are not supported, ignoring.
Nov 4 05:04:33.078309 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 05:04:33.082416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 05:04:33.099368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 05:04:33.105923 kernel: loop2: detected capacity change from 0 to 219144
Nov 4 05:04:33.114383 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 05:04:33.136888 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 05:04:33.141947 kernel: loop3: detected capacity change from 0 to 111544
Nov 4 05:04:33.141913 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 05:04:33.146170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 05:04:33.163985 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 05:04:33.179707 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Nov 4 05:04:33.179738 systemd-tmpfiles[1309]: ACLs are not supported, ignoring.
Nov 4 05:04:33.185054 kernel: loop4: detected capacity change from 0 to 119080
Nov 4 05:04:33.186857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 05:04:33.199963 kernel: loop5: detected capacity change from 0 to 219144
Nov 4 05:04:33.209930 kernel: loop6: detected capacity change from 0 to 111544
Nov 4 05:04:33.222708 (sd-merge)[1313]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 4 05:04:33.227866 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 05:04:33.229712 (sd-merge)[1313]: Merged extensions into '/usr'.
Nov 4 05:04:33.237621 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 05:04:33.237805 systemd[1]: Reloading...
Nov 4 05:04:33.325316 zram_generator::config[1347]: No configuration found.
Nov 4 05:04:33.330606 systemd-resolved[1308]: Positive Trust Anchors:
Nov 4 05:04:33.331014 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 05:04:33.331026 systemd-resolved[1308]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 05:04:33.331058 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 05:04:33.342806 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Nov 4 05:04:33.544606 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 05:04:33.544932 systemd[1]: Reloading finished in 306 ms.
Nov 4 05:04:33.576125 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 05:04:33.578664 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 05:04:33.585178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 05:04:33.629026 systemd[1]: Starting ensure-sysext.service...
Nov 4 05:04:33.632035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 05:04:33.648086 systemd[1]: Reload requested from client PID 1383 ('systemctl') (unit ensure-sysext.service)...
Nov 4 05:04:33.648111 systemd[1]: Reloading...
Nov 4 05:04:33.734675 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 05:04:33.734732 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 05:04:33.735229 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 4 05:04:33.735609 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 4 05:04:33.736876 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 4 05:04:33.737621 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 4 05:04:33.737720 systemd-tmpfiles[1384]: ACLs are not supported, ignoring.
Nov 4 05:04:33.746939 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 05:04:33.747174 systemd-tmpfiles[1384]: Skipping /boot
Nov 4 05:04:33.774026 zram_generator::config[1414]: No configuration found.
Nov 4 05:04:33.800341 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot.
Nov 4 05:04:33.800360 systemd-tmpfiles[1384]: Skipping /boot
Nov 4 05:04:33.986276 systemd[1]: Reloading finished in 337 ms.
Nov 4 05:04:34.001079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 05:04:34.025563 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 05:04:34.039590 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 05:04:34.044269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 4 05:04:34.067274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 4 05:04:34.070642 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 4 05:04:34.076110 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 05:04:34.079360 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 4 05:04:34.085112 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.085272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 05:04:34.086809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 05:04:34.091279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 05:04:34.103229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 05:04:34.105489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 05:04:34.105602 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 05:04:34.105741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.107228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 05:04:34.108048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 05:04:34.113624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 05:04:34.114260 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 05:04:34.120997 systemd-udevd[1458]: Using default interface naming scheme 'v257'.
Nov 4 05:04:34.127839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.128553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 05:04:34.130547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 05:04:34.166365 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 05:04:34.168643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 05:04:34.168934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 05:04:34.169092 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.170829 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 05:04:34.171173 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 05:04:34.173973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 05:04:34.187297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 05:04:34.190836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 4 05:04:34.193973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 05:04:34.194268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 05:04:34.203177 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 4 05:04:34.212622 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.212843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 05:04:34.214681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 05:04:34.218927 augenrules[1490]: No rules
Nov 4 05:04:34.217639 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 05:04:34.223141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 05:04:34.228168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 05:04:34.230001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 05:04:34.230116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 05:04:34.230240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 05:04:34.231274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 05:04:34.234068 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 05:04:34.234327 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 05:04:34.237751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 05:04:34.237997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 05:04:34.240810 systemd[1]: Finished ensure-sysext.service.
Nov 4 05:04:34.242810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 05:04:34.247279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 05:04:34.249487 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 05:04:34.250387 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 05:04:34.269925 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 05:04:34.270168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 05:04:34.278356 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 4 05:04:34.291850 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 4 05:04:34.293861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 05:04:34.293977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 05:04:34.296191 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 4 05:04:34.298569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 4 05:04:34.469509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 4 05:04:34.472055 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 4 05:04:34.473453 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 4 05:04:34.489929 kernel: mousedev: PS/2 mouse device common for all mice
Nov 4 05:04:34.498262 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 4 05:04:34.500979 systemd[1]: Reached target time-set.target - System Time Set.
Nov 4 05:04:34.507222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 4 05:04:34.512205 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 4 05:04:34.511520 systemd-networkd[1525]: lo: Link UP
Nov 4 05:04:34.511537 systemd-networkd[1525]: lo: Gained carrier
Nov 4 05:04:34.513812 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 4 05:04:34.515086 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:04:34.515162 systemd-networkd[1525]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 4 05:04:34.516389 systemd[1]: Reached target network.target - Network.
Nov 4 05:04:34.517036 systemd-networkd[1525]: eth0: Link UP
Nov 4 05:04:34.517853 systemd-networkd[1525]: eth0: Gained carrier
Nov 4 05:04:34.519059 systemd-networkd[1525]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 4 05:04:34.519917 kernel: ACPI: button: Power Button [PWRF]
Nov 4 05:04:34.521809 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 4 05:04:34.526510 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 4 05:04:34.541025 systemd-networkd[1525]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 4 05:04:34.542050 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Nov 4 05:04:35.055916 systemd-timesyncd[1526]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 4 05:04:35.056013 systemd-timesyncd[1526]: Initial clock synchronization to Tue 2025-11-04 05:04:35.055770 UTC.
Nov 4 05:04:35.056599 systemd-resolved[1308]: Clock change detected. Flushing caches.
Nov 4 05:04:35.070981 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 4 05:04:35.071365 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 4 05:04:35.072997 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 4 05:04:35.933993 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1114970494 wd_nsec: 1114970319
Nov 4 05:04:35.962055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 4 05:04:36.122631 kernel: kvm_amd: TSC scaling supported
Nov 4 05:04:36.122692 kernel: kvm_amd: Nested Virtualization enabled
Nov 4 05:04:36.122741 kernel: kvm_amd: Nested Paging enabled
Nov 4 05:04:36.124002 kernel: kvm_amd: LBR virtualization supported
Nov 4 05:04:36.124079 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 4 05:04:36.125739 kernel: kvm_amd: Virtual GIF supported
Nov 4 05:04:36.218084 kernel: EDAC MC: Ver: 3.0.0
Nov 4 05:04:36.275084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 05:04:36.290381 ldconfig[1455]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 4 05:04:36.298200 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 4 05:04:36.302088 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 4 05:04:36.364834 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 4 05:04:36.367238 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 05:04:36.369315 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 4 05:04:36.371499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 4 05:04:36.373721 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 4 05:04:36.375942 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 4 05:04:36.377976 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 4 05:04:36.380222 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 4 05:04:36.382674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 4 05:04:36.382704 systemd[1]: Reached target paths.target - Path Units.
Nov 4 05:04:36.384358 systemd[1]: Reached target timers.target - Timer Units.
Nov 4 05:04:36.387385 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 4 05:04:36.391352 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 4 05:04:36.396134 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 4 05:04:36.398646 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 4 05:04:36.400992 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 4 05:04:36.406937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 4 05:04:36.409463 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 4 05:04:36.412602 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 4 05:04:36.415459 systemd[1]: Reached target sockets.target - Socket Units.
Nov 4 05:04:36.417090 systemd[1]: Reached target basic.target - Basic System.
Nov 4 05:04:36.418714 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 4 05:04:36.418761 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 4 05:04:36.420030 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 4 05:04:36.423228 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 4 05:04:36.426000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 4 05:04:36.429167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 4 05:04:36.443310 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 4 05:04:36.445243 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 4 05:04:36.446388 jq[1578]: false
Nov 4 05:04:36.446735 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 4 05:04:36.450545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 4 05:04:36.454989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 4 05:04:36.460452 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 4 05:04:36.463799 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 4 05:04:36.464538 oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Nov 4 05:04:36.466350 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing passwd entry cache
Nov 4 05:04:36.468990 extend-filesystems[1579]: Found /dev/vda6
Nov 4 05:04:36.473919 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 4 05:04:36.475285 extend-filesystems[1579]: Found /dev/vda9
Nov 4 05:04:36.475936 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 4 05:04:36.476611 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 4 05:04:36.477101 oslogin_cache_refresh[1580]: Failure getting users, quitting
Nov 4 05:04:36.477382 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting users, quitting
Nov 4 05:04:36.477382 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 05:04:36.477382 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Refreshing group entry cache
Nov 4 05:04:36.477130 oslogin_cache_refresh[1580]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 4 05:04:36.477211 oslogin_cache_refresh[1580]: Refreshing group entry cache
Nov 4 05:04:36.477843 systemd[1]: Starting update-engine.service - Update Engine...
Nov 4 05:04:36.481074 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 4 05:04:36.482234 extend-filesystems[1579]: Checking size of /dev/vda9
Nov 4 05:04:36.486117 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Failure getting groups, quitting
Nov 4 05:04:36.486117 google_oslogin_nss_cache[1580]: oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 05:04:36.484954 oslogin_cache_refresh[1580]: Failure getting groups, quitting
Nov 4 05:04:36.484991 oslogin_cache_refresh[1580]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 4 05:04:36.489870 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 4 05:04:36.492461 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 4 05:04:36.492802 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 4 05:04:36.493210 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 4 05:04:36.493478 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 4 05:04:36.495845 systemd[1]: motdgen.service: Deactivated successfully.
Nov 4 05:04:36.496427 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 4 05:04:36.500787 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 4 05:04:36.502401 jq[1597]: true
Nov 4 05:04:36.501451 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 4 05:04:36.516389 update_engine[1596]: I20251104 05:04:36.516031 1596 main.cc:92] Flatcar Update Engine starting
Nov 4 05:04:36.517362 extend-filesystems[1579]: Resized partition /dev/vda9
Nov 4 05:04:36.521359 jq[1606]: true
Nov 4 05:04:36.531071 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025)
Nov 4 05:04:36.535207 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 4 05:04:36.535252 tar[1603]: linux-amd64/LICENSE
Nov 4 05:04:36.535252 tar[1603]: linux-amd64/helm
Nov 4 05:04:36.587981 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 4 05:04:36.587658 dbus-daemon[1576]: [system] SELinux support is enabled
Nov 4 05:04:36.616302 extend-filesystems[1626]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 4 05:04:36.616302 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 4 05:04:36.616302 extend-filesystems[1626]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 4 05:04:36.635590 update_engine[1596]: I20251104 05:04:36.598300 1596 update_check_scheduler.cc:74] Next update check in 8m39s
Nov 4 05:04:36.589150 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 4 05:04:36.635703 extend-filesystems[1579]: Resized filesystem in /dev/vda9
Nov 4 05:04:36.593715 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 4 05:04:36.593739 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 4 05:04:36.596062 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 4 05:04:36.596076 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 4 05:04:36.601836 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 05:04:36.617142 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 05:04:36.618066 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 05:04:36.618435 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 05:04:36.650647 systemd-networkd[1525]: eth0: Gained IPv6LL
Nov 4 05:04:36.706635 bash[1643]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 05:04:36.722229 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 05:04:36.725634 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 4 05:04:36.742754 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 05:04:36.742759 systemd-logind[1594]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 05:04:36.742792 systemd-logind[1594]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 05:04:36.749295 systemd-logind[1594]: New seat seat0. Nov 4 05:04:36.749591 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 05:04:36.755829 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 05:04:36.760139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:04:36.764988 sshd_keygen[1624]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 05:04:36.770618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 05:04:36.784886 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 05:04:36.840930 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 05:04:36.846277 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 05:04:36.851639 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 05:04:36.932342 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 05:04:36.940203 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 05:04:36.940530 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 05:04:36.944936 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 05:04:36.945525 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 05:04:36.945889 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 05:04:36.951341 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 05:04:37.014056 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 05:04:37.019263 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 05:04:37.023263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 4 05:04:37.025730 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 4 05:04:37.146563 containerd[1621]: time="2025-11-04T05:04:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 05:04:37.146929 containerd[1621]: time="2025-11-04T05:04:37.146626590Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 05:04:37.162101 containerd[1621]: time="2025-11-04T05:04:37.162048055Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.741µs" Nov 4 05:04:37.162101 containerd[1621]: time="2025-11-04T05:04:37.162091246Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 05:04:37.162201 containerd[1621]: time="2025-11-04T05:04:37.162137352Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 05:04:37.162201 containerd[1621]: time="2025-11-04T05:04:37.162154555Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 05:04:37.162413 containerd[1621]: time="2025-11-04T05:04:37.162372203Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 05:04:37.162439 containerd[1621]: time="2025-11-04T05:04:37.162414352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162523 containerd[1621]: time="2025-11-04T05:04:37.162499862Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162546 containerd[1621]: time="2025-11-04T05:04:37.162521452Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162817 containerd[1621]: time="2025-11-04T05:04:37.162782231Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162817 containerd[1621]: time="2025-11-04T05:04:37.162802870Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162866 containerd[1621]: time="2025-11-04T05:04:37.162817508Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 05:04:37.162866 containerd[1621]: time="2025-11-04T05:04:37.162829179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.163166 containerd[1621]: time="2025-11-04T05:04:37.163128240Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.163166 containerd[1621]: time="2025-11-04T05:04:37.163154640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 05:04:37.163297 containerd[1621]: time="2025-11-04T05:04:37.163271609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 05:04:37.163682 containerd[1621]: time="2025-11-04T05:04:37.163648125Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.163713 containerd[1621]: time="2025-11-04T05:04:37.163692418Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 05:04:37.163713 containerd[1621]: time="2025-11-04T05:04:37.163709019Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 05:04:37.164170 containerd[1621]: time="2025-11-04T05:04:37.164108518Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 05:04:37.165089 containerd[1621]: time="2025-11-04T05:04:37.164645315Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 05:04:37.165089 containerd[1621]: time="2025-11-04T05:04:37.164876348Z" level=info msg="metadata content store policy set" policy=shared Nov 4 05:04:37.170718 containerd[1621]: time="2025-11-04T05:04:37.170642449Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 05:04:37.170846 containerd[1621]: time="2025-11-04T05:04:37.170827847Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 05:04:37.171030 containerd[1621]: time="2025-11-04T05:04:37.171009568Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 05:04:37.171081 containerd[1621]: time="2025-11-04T05:04:37.171068819Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 05:04:37.171130 containerd[1621]: time="2025-11-04T05:04:37.171119173Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 05:04:37.171180 containerd[1621]: time="2025-11-04T05:04:37.171168706Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 05:04:37.171228 containerd[1621]: time="2025-11-04T05:04:37.171216295Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 05:04:37.171503 containerd[1621]: time="2025-11-04T05:04:37.171485059Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 05:04:37.171575 containerd[1621]: time="2025-11-04T05:04:37.171559789Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 05:04:37.171660 containerd[1621]: time="2025-11-04T05:04:37.171643767Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 05:04:37.171722 containerd[1621]: time="2025-11-04T05:04:37.171707176Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 05:04:37.171789 containerd[1621]: time="2025-11-04T05:04:37.171775534Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 05:04:37.171840 containerd[1621]: time="2025-11-04T05:04:37.171828303Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 05:04:37.171892 containerd[1621]: time="2025-11-04T05:04:37.171879869Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 05:04:37.172106 containerd[1621]: time="2025-11-04T05:04:37.172085735Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 05:04:37.172171 containerd[1621]: time="2025-11-04T05:04:37.172157831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 05:04:37.172222 containerd[1621]: time="2025-11-04T05:04:37.172210519Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 05:04:37.172292 containerd[1621]: time="2025-11-04T05:04:37.172279018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 05:04:37.172351 containerd[1621]: time="2025-11-04T05:04:37.172338229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 05:04:37.172410 containerd[1621]: time="2025-11-04T05:04:37.172387832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 05:04:37.172463 containerd[1621]: time="2025-11-04T05:04:37.172450059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 05:04:37.172519 containerd[1621]: time="2025-11-04T05:04:37.172505923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 05:04:37.172573 containerd[1621]: time="2025-11-04T05:04:37.172560285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 05:04:37.172622 containerd[1621]: time="2025-11-04T05:04:37.172611050Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 05:04:37.172680 containerd[1621]: time="2025-11-04T05:04:37.172667326Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 05:04:37.172750 containerd[1621]: time="2025-11-04T05:04:37.172737087Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 05:04:37.173024 containerd[1621]: time="2025-11-04T05:04:37.172976756Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 05:04:37.173145 containerd[1621]: time="2025-11-04T05:04:37.173129202Z" level=info msg="Start snapshots syncer" Nov 4 05:04:37.173239 containerd[1621]: time="2025-11-04T05:04:37.173222487Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 05:04:37.184182 containerd[1621]: time="2025-11-04T05:04:37.184091346Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184427496Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184587266Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184805034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184842995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184855479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184866229Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184878221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184890555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184901335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184912075Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184926532Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184975364Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.184990592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 05:04:37.185192 containerd[1621]: time="2025-11-04T05:04:37.185000160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185009457Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185022522Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185034224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185045505Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185074960Z" level=info msg="runtime interface created" Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185081613Z" level=info msg="created NRI interface" Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185089998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185106560Z" level=info msg="Connect containerd service" Nov 4 05:04:37.185506 containerd[1621]: time="2025-11-04T05:04:37.185133189Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 05:04:37.187065 containerd[1621]: time="2025-11-04T05:04:37.187043672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 05:04:37.356139 tar[1603]: linux-amd64/README.md Nov 4 05:04:37.384507 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430203657Z" level=info msg="Start subscribing containerd event" Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430294767Z" level=info msg="Start recovering state" Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430530620Z" level=info msg="Start event monitor" Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430552461Z" level=info msg="Start cni network conf syncer for default" Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430563892Z" level=info msg="Start streaming server"
Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430570755Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430642369Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430586885Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430709174Z" level=info msg="runtime interface starting up..." Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430720556Z" level=info msg="starting plugins..." Nov 4 05:04:37.430842 containerd[1621]: time="2025-11-04T05:04:37.430752736Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 05:04:37.431176 containerd[1621]: time="2025-11-04T05:04:37.430992305Z" level=info msg="containerd successfully booted in 0.285784s" Nov 4 05:04:37.431149 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 05:04:38.425460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:04:38.428005 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 05:04:38.430139 systemd[1]: Startup finished in 3.082s (kernel) + 6.005s (initrd) + 6.076s (userspace) = 15.164s. Nov 4 05:04:38.448297 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 05:04:38.864889 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 05:04:38.866327 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:41424.service - OpenSSH per-connection server daemon (10.0.0.1:41424). Nov 4 05:04:39.017686 kubelet[1716]: E1104 05:04:39.017598 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 05:04:39.022089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 05:04:39.022320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 05:04:39.022837 systemd[1]: kubelet.service: Consumed 1.949s CPU time, 259.9M memory peak. Nov 4 05:04:39.064812 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 41424 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:39.067572 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:39.079987 systemd-logind[1594]: New session 1 of user core. Nov 4 05:04:39.081712 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 05:04:39.083640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 05:04:39.129151 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 05:04:39.132218 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 05:04:39.165577 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 05:04:39.168465 systemd-logind[1594]: New session c1 of user core. Nov 4 05:04:39.325296 systemd[1734]: Queued start job for default target default.target. Nov 4 05:04:39.347417 systemd[1734]: Created slice app.slice - User Application Slice. Nov 4 05:04:39.347446 systemd[1734]: Reached target paths.target - Paths.
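The containerd error a few records up ("no network config found in /etc/cni/net.d") is expected at this point in boot: the CRI plugin loads before any CNI add-on has written a network config, so it defers pod networking rather than failing. What it scans for is a conflist file; a minimal sketch of one is below (file name, network name, bridge device, and subnet are all illustrative assumptions, not values from this log; a real cluster's CNI add-on installs its own):

    import json

    # Hypothetical /etc/cni/net.d/10-example.conflist that would satisfy
    # containerd's CNI config scan; every value here is illustrative.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "example-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.244.0.0/24"}]],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    with open("/etc/cni/net.d/10-example.conflist", "w") as f:
        json.dump(conflist, f, indent=2)

Once a file like this exists, the "cni network conf syncer" started above should pick it up without a containerd restart.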
Nov 4 05:04:39.347517 systemd[1734]: Reached target timers.target - Timers. Nov 4 05:04:39.349178 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 05:04:39.365361 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 05:04:39.365532 systemd[1734]: Reached target sockets.target - Sockets. Nov 4 05:04:39.365589 systemd[1734]: Reached target basic.target - Basic System. Nov 4 05:04:39.365643 systemd[1734]: Reached target default.target - Main User Target. Nov 4 05:04:39.365698 systemd[1734]: Startup finished in 190ms. Nov 4 05:04:39.366074 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 05:04:39.367853 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 05:04:39.396106 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:41426.service - OpenSSH per-connection server daemon (10.0.0.1:41426). Nov 4 05:04:39.471676 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 41426 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:39.473645 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:39.478701 systemd-logind[1594]: New session 2 of user core. Nov 4 05:04:39.489190 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 05:04:39.503774 sshd[1748]: Connection closed by 10.0.0.1 port 41426 Nov 4 05:04:39.504029 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:39.517826 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:41426.service: Deactivated successfully. Nov 4 05:04:39.520316 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 05:04:39.521231 systemd-logind[1594]: Session 2 logged out. Waiting for processes to exit. Nov 4 05:04:39.524047 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:41434.service - OpenSSH per-connection server daemon (10.0.0.1:41434). Nov 4 05:04:39.524806 systemd-logind[1594]: Removed session 2. Nov 4 05:04:39.585338 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 41434 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:39.587004 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:39.591653 systemd-logind[1594]: New session 3 of user core. Nov 4 05:04:39.605226 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 05:04:39.615333 sshd[1758]: Connection closed by 10.0.0.1 port 41434 Nov 4 05:04:39.615663 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:39.645055 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:41434.service: Deactivated successfully. Nov 4 05:04:39.648293 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 05:04:39.649792 systemd-logind[1594]: Session 3 logged out. Waiting for processes to exit. Nov 4 05:04:39.654787 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:41442.service - OpenSSH per-connection server daemon (10.0.0.1:41442). Nov 4 05:04:39.655624 systemd-logind[1594]: Removed session 3. Nov 4 05:04:39.724250 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 41442 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:39.726040 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:39.730687 systemd-logind[1594]: New session 4 of user core. Nov 4 05:04:39.740167 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 4 05:04:39.755991 sshd[1767]: Connection closed by 10.0.0.1 port 41442 Nov 4 05:04:39.756394 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:39.772105 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:41442.service: Deactivated successfully. Nov 4 05:04:39.774467 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 05:04:39.775436 systemd-logind[1594]: Session 4 logged out. Waiting for processes to exit. Nov 4 05:04:39.778937 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Nov 4 05:04:39.780135 systemd-logind[1594]: Removed session 4. Nov 4 05:04:39.849903 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:39.851429 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:39.856937 systemd-logind[1594]: New session 5 of user core. Nov 4 05:04:39.867121 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 4 05:04:39.900604 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 05:04:39.900920 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 05:04:39.919280 sudo[1777]: pam_unix(sudo:session): session closed for user root Nov 4 05:04:39.922920 sshd[1776]: Connection closed by 10.0.0.1 port 41450 Nov 4 05:04:39.923597 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:39.935461 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:41450.service: Deactivated successfully. Nov 4 05:04:39.937838 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 05:04:39.938906 systemd-logind[1594]: Session 5 logged out. Waiting for processes to exit. Nov 4 05:04:39.942389 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:41456.service - OpenSSH per-connection server daemon (10.0.0.1:41456). Nov 4 05:04:39.944150 systemd-logind[1594]: Removed session 5. Nov 4 05:04:40.009107 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 41456 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:40.010587 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:40.015293 systemd-logind[1594]: New session 6 of user core. Nov 4 05:04:40.031170 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 05:04:40.048412 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 05:04:40.048847 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 05:04:40.056488 sudo[1788]: pam_unix(sudo:session): session closed for user root Nov 4 05:04:40.064255 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 05:04:40.064582 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 05:04:40.076396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 05:04:40.134021 augenrules[1810]: No rules Nov 4 05:04:40.135701 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 05:04:40.136054 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
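The "No rules" from augenrules follows directly from the two sudo invocations above: the stock rule files under /etc/audit/rules.d were removed, then audit-rules.service was restarted against an empty directory. For context, rule files in that directory hold one auditctl-style rule per line, and augenrules concatenates rules.d/*.rules into the loaded rule set; a purely illustrative example (the watch path and key name are assumptions, not from this log):

    # Hypothetical contents for a file like /etc/audit/rules.d/10-identity.rules.
    rule = "-w /etc/passwd -p wa -k identity\n"
    with open("/etc/audit/rules.d/10-identity.rules", "w") as f:
        f.write(rule)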
Nov 4 05:04:40.137469 sudo[1787]: pam_unix(sudo:session): session closed for user root Nov 4 05:04:40.139238 sshd[1786]: Connection closed by 10.0.0.1 port 41456 Nov 4 05:04:40.139617 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Nov 4 05:04:40.151495 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:41456.service: Deactivated successfully. Nov 4 05:04:40.153922 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 05:04:40.154908 systemd-logind[1594]: Session 6 logged out. Waiting for processes to exit. Nov 4 05:04:40.157930 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:41472.service - OpenSSH per-connection server daemon (10.0.0.1:41472). Nov 4 05:04:40.158882 systemd-logind[1594]: Removed session 6. Nov 4 05:04:40.236070 sshd[1819]: Accepted publickey for core from 10.0.0.1 port 41472 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:04:40.237840 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:04:40.243080 systemd-logind[1594]: New session 7 of user core. Nov 4 05:04:40.254131 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 4 05:04:40.268699 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 05:04:40.269053 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 05:04:41.093416 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 05:04:41.119427 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 05:04:41.671877 dockerd[1845]: time="2025-11-04T05:04:41.671773539Z" level=info msg="Starting up" Nov 4 05:04:41.673271 dockerd[1845]: time="2025-11-04T05:04:41.673195886Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 05:04:41.693313 dockerd[1845]: time="2025-11-04T05:04:41.693221564Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 05:04:42.347577 dockerd[1845]: time="2025-11-04T05:04:42.347502951Z" level=info msg="Loading containers: start." Nov 4 05:04:42.359004 kernel: Initializing XFRM netlink socket Nov 4 05:04:42.685516 systemd-networkd[1525]: docker0: Link UP Nov 4 05:04:42.691019 dockerd[1845]: time="2025-11-04T05:04:42.690891224Z" level=info msg="Loading containers: done." Nov 4 05:04:42.756913 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3619950978-merged.mount: Deactivated successfully. 
Nov 4 05:04:42.758769 dockerd[1845]: time="2025-11-04T05:04:42.758708041Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 05:04:42.758886 dockerd[1845]: time="2025-11-04T05:04:42.758842533Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 05:04:42.758987 dockerd[1845]: time="2025-11-04T05:04:42.758949193Z" level=info msg="Initializing buildkit" Nov 4 05:04:42.794053 dockerd[1845]: time="2025-11-04T05:04:42.794002588Z" level=info msg="Completed buildkit initialization" Nov 4 05:04:42.799183 dockerd[1845]: time="2025-11-04T05:04:42.799124442Z" level=info msg="Daemon has completed initialization" Nov 4 05:04:42.799330 dockerd[1845]: time="2025-11-04T05:04:42.799217346Z" level=info msg="API listen on /run/docker.sock" Nov 4 05:04:42.799508 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 05:04:43.547041 containerd[1621]: time="2025-11-04T05:04:43.546952669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 05:04:44.832491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805220842.mount: Deactivated successfully. Nov 4 05:04:46.318423 containerd[1621]: time="2025-11-04T05:04:46.318330512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:46.319205 containerd[1621]: time="2025-11-04T05:04:46.319149067Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27039787" Nov 4 05:04:46.327587 containerd[1621]: time="2025-11-04T05:04:46.327513180Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:46.339793 containerd[1621]: time="2025-11-04T05:04:46.339746918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:46.344501 containerd[1621]: time="2025-11-04T05:04:46.340978838Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.793951018s" Nov 4 05:04:46.344554 containerd[1621]: time="2025-11-04T05:04:46.344504929Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 4 05:04:46.345670 containerd[1621]: time="2025-11-04T05:04:46.345647962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 05:04:47.922980 containerd[1621]: time="2025-11-04T05:04:47.922865114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:04:47.923763 containerd[1621]: time="2025-11-04T05:04:47.923705369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604" Nov 4 05:04:47.925151 containerd[1621]: time="2025-11-04T05:04:47.925111897Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:47.928365 containerd[1621]: time="2025-11-04T05:04:47.928318108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:47.929368 containerd[1621]: time="2025-11-04T05:04:47.929303276Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.583625337s" Nov 4 05:04:47.929368 containerd[1621]: time="2025-11-04T05:04:47.929366795Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 4 05:04:47.929930 containerd[1621]: time="2025-11-04T05:04:47.929886689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 05:04:48.908598 containerd[1621]: time="2025-11-04T05:04:48.908522331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:48.909923 containerd[1621]: time="2025-11-04T05:04:48.909853206Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0" Nov 4 05:04:49.024587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 05:04:49.026640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:04:49.303145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:04:49.307457 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 05:04:49.351913 kubelet[2141]: E1104 05:04:49.351796 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 05:04:49.358818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 05:04:49.359066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 05:04:49.359514 systemd[1]: kubelet.service: Consumed 296ms CPU time, 110.3M memory peak.
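Both kubelet start attempts so far have died the same way: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts until something writes it. On a node like this one, that file normally appears when kubeadm init or kubeadm join runs; the crash loop is the expected idle state before then. The sketch below only shows the general shape of the file (field values are assumptions, except cgroupDriver, which matches the cgroupDriver="systemd" reported later in this log):

    # Minimal KubeletConfiguration of the kind kubeadm writes to
    # /var/lib/kubelet/config.yaml; illustrative, not recovered from this log.
    minimal = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(minimal)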
Nov 4 05:04:50.041060 containerd[1621]: time="2025-11-04T05:04:50.040991931Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:50.046429 containerd[1621]: time="2025-11-04T05:04:50.046375445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:50.047412 containerd[1621]: time="2025-11-04T05:04:50.047373176Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 2.117452212s" Nov 4 05:04:50.047478 containerd[1621]: time="2025-11-04T05:04:50.047411869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 4 05:04:50.048195 containerd[1621]: time="2025-11-04T05:04:50.048141346Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 05:04:51.384897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657467079.mount: Deactivated successfully. Nov 4 05:04:51.880022 containerd[1621]: time="2025-11-04T05:04:51.879938896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:51.880884 containerd[1621]: time="2025-11-04T05:04:51.880848492Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25960977" Nov 4 05:04:51.882133 containerd[1621]: time="2025-11-04T05:04:51.882060494Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:51.884104 containerd[1621]: time="2025-11-04T05:04:51.884069872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:51.884794 containerd[1621]: time="2025-11-04T05:04:51.884745248Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.836548448s" Nov 4 05:04:51.884794 containerd[1621]: time="2025-11-04T05:04:51.884787087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 4 05:04:51.885442 containerd[1621]: time="2025-11-04T05:04:51.885403552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 05:04:53.814821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365292885.mount: Deactivated successfully. 
Nov 4 05:04:55.374770 containerd[1621]: time="2025-11-04T05:04:55.374671910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:55.376451 containerd[1621]: time="2025-11-04T05:04:55.376427942Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22253740" Nov 4 05:04:55.377911 containerd[1621]: time="2025-11-04T05:04:55.377844899Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:55.382278 containerd[1621]: time="2025-11-04T05:04:55.382237014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:55.383787 containerd[1621]: time="2025-11-04T05:04:55.383728030Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.498293249s" Nov 4 05:04:55.383787 containerd[1621]: time="2025-11-04T05:04:55.383771752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 4 05:04:55.384550 containerd[1621]: time="2025-11-04T05:04:55.384325851Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 05:04:56.007993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019165481.mount: Deactivated successfully. 
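The Pulled records carry both a byte count and a wall-clock duration, so each pull's effective throughput falls out directly; coredns, for example:

    # Effective pull rate for the coredns image, numbers from the log above.
    size_bytes = 22_384_805      # "size" reported in the Pulled record
    duration_s = 3.498293249     # pull duration from the same record
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~6.4 MB/s

By the same arithmetic, the earlier kube-apiserver pull (27061991 bytes in 2.793951018s) ran at roughly 9.7 MB/s.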
Nov 4 05:04:56.015358 containerd[1621]: time="2025-11-04T05:04:56.015288856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:56.016195 containerd[1621]: time="2025-11-04T05:04:56.016156684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Nov 4 05:04:56.017473 containerd[1621]: time="2025-11-04T05:04:56.017414592Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:56.020124 containerd[1621]: time="2025-11-04T05:04:56.020056896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:04:56.021048 containerd[1621]: time="2025-11-04T05:04:56.020984245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 636.623118ms" Nov 4 05:04:56.021114 containerd[1621]: time="2025-11-04T05:04:56.021043847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 4 05:04:56.021768 containerd[1621]: time="2025-11-04T05:04:56.021680200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 05:04:59.524416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 05:04:59.528172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:04:59.858655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:04:59.875284 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 05:05:00.413134 kubelet[2264]: E1104 05:05:00.413050 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 05:05:00.417646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 05:05:00.417874 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 05:05:00.418325 systemd[1]: kubelet.service: Consumed 258ms CPU time, 110.6M memory peak. 
Nov 4 05:05:00.773494 containerd[1621]: time="2025-11-04T05:05:00.773374395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:00.774345 containerd[1621]: time="2025-11-04T05:05:00.774303588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72350412" Nov 4 05:05:00.775652 containerd[1621]: time="2025-11-04T05:05:00.775602423Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:00.778275 containerd[1621]: time="2025-11-04T05:05:00.778234137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:00.779342 containerd[1621]: time="2025-11-04T05:05:00.779308372Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.757569261s" Nov 4 05:05:00.779409 containerd[1621]: time="2025-11-04T05:05:00.779343818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 4 05:05:03.666803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:05:03.667084 systemd[1]: kubelet.service: Consumed 258ms CPU time, 110.6M memory peak. Nov 4 05:05:03.669608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:05:03.700069 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... Nov 4 05:05:03.700083 systemd[1]: Reloading... Nov 4 05:05:03.806032 zram_generator::config[2351]: No configuration found. Nov 4 05:05:04.142213 systemd[1]: Reloading finished in 441 ms. Nov 4 05:05:04.213708 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 05:05:04.213821 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 05:05:04.214202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:05:04.214259 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.1M memory peak. Nov 4 05:05:04.216241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:05:04.402361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:05:04.407240 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 05:05:04.452352 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 05:05:04.452352 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 05:05:04.452754 kubelet[2396]: I1104 05:05:04.452400 2396 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 05:05:05.142436 kubelet[2396]: I1104 05:05:05.142376 2396 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 05:05:05.142436 kubelet[2396]: I1104 05:05:05.142409 2396 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 05:05:05.144340 kubelet[2396]: I1104 05:05:05.144313 2396 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 05:05:05.144340 kubelet[2396]: I1104 05:05:05.144328 2396 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 05:05:05.144610 kubelet[2396]: I1104 05:05:05.144583 2396 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 05:05:06.355131 kubelet[2396]: E1104 05:05:06.355050 2396 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 05:05:06.356842 kubelet[2396]: I1104 05:05:06.356765 2396 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 05:05:06.361982 kubelet[2396]: I1104 05:05:06.360477 2396 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 05:05:06.365939 kubelet[2396]: I1104 05:05:06.365861 2396 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 05:05:06.367145 kubelet[2396]: I1104 05:05:06.367087 2396 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 05:05:06.367289 kubelet[2396]: I1104 05:05:06.367123 2396 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 05:05:06.367434 kubelet[2396]: I1104 05:05:06.367296 2396 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 05:05:06.367434 kubelet[2396]: I1104 05:05:06.367305 2396 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 05:05:06.367434 kubelet[2396]: I1104 05:05:06.367410 2396 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 05:05:06.371128 kubelet[2396]: I1104 05:05:06.371096 2396 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:05:06.371316 kubelet[2396]: I1104 05:05:06.371288 2396 kubelet.go:475] "Attempting to sync node with API server" Nov 4 05:05:06.371316 kubelet[2396]: I1104 05:05:06.371304 2396 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 05:05:06.371400 kubelet[2396]: I1104 05:05:06.371332 2396 kubelet.go:387] "Adding apiserver pod source" Nov 4 05:05:06.371400 kubelet[2396]: I1104 05:05:06.371371 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 05:05:06.372229 kubelet[2396]: E1104 05:05:06.372169 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 05:05:06.372623 kubelet[2396]: E1104 05:05:06.372552 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 05:05:06.374287 kubelet[2396]: I1104 05:05:06.374263 2396 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 05:05:06.374777 kubelet[2396]: I1104 05:05:06.374739 2396 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 05:05:06.374777 kubelet[2396]: I1104 05:05:06.374768 2396 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 05:05:06.374868 kubelet[2396]: W1104 05:05:06.374820 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 4 05:05:06.378412 kubelet[2396]: I1104 05:05:06.378394 2396 server.go:1262] "Started kubelet" Nov 4 05:05:06.378572 kubelet[2396]: I1104 05:05:06.378548 2396 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 05:05:06.379564 kubelet[2396]: I1104 05:05:06.379538 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 05:05:06.387314 kubelet[2396]: I1104 05:05:06.386044 2396 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 05:05:06.387314 kubelet[2396]: I1104 05:05:06.386120 2396 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 05:05:06.387314 kubelet[2396]: I1104 05:05:06.386602 2396 server.go:310] "Adding debug handlers to kubelet server" Nov 4 05:05:06.387314 kubelet[2396]: I1104 05:05:06.386622 2396 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 05:05:06.390038 kubelet[2396]: I1104 05:05:06.389842 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 05:05:06.391829 kubelet[2396]: I1104 05:05:06.391784 2396 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 05:05:06.392278 kubelet[2396]: E1104 05:05:06.392245 2396 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 05:05:06.392391 kubelet[2396]: I1104 05:05:06.392364 2396 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 05:05:06.392486 kubelet[2396]: I1104 05:05:06.392459 2396 reconciler.go:29] "Reconciler: start to sync state" Nov 4 05:05:06.392928 kubelet[2396]: E1104 05:05:06.388877 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874b54f4b41996c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 05:05:06.378340716 +0000 UTC m=+1.966457909,LastTimestamp:2025-11-04 05:05:06.378340716 +0000 UTC m=+1.966457909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 05:05:06.393088 
kubelet[2396]: E1104 05:05:06.393054 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Nov 4 05:05:06.394051 kubelet[2396]: E1104 05:05:06.393575 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 05:05:06.394051 kubelet[2396]: I1104 05:05:06.393679 2396 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 05:05:06.394487 kubelet[2396]: E1104 05:05:06.394453 2396 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 05:05:06.397358 kubelet[2396]: I1104 05:05:06.397126 2396 factory.go:223] Registration of the containerd container factory successfully Nov 4 05:05:06.397358 kubelet[2396]: I1104 05:05:06.397158 2396 factory.go:223] Registration of the systemd container factory successfully Nov 4 05:05:06.399103 kubelet[2396]: I1104 05:05:06.399067 2396 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 05:05:06.414780 kubelet[2396]: I1104 05:05:06.414740 2396 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 05:05:06.414780 kubelet[2396]: I1104 05:05:06.414760 2396 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 05:05:06.414780 kubelet[2396]: I1104 05:05:06.414781 2396 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:05:06.416466 kubelet[2396]: I1104 05:05:06.416432 2396 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 05:05:06.416466 kubelet[2396]: I1104 05:05:06.416466 2396 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 05:05:06.416547 kubelet[2396]: I1104 05:05:06.416494 2396 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 05:05:06.416547 kubelet[2396]: E1104 05:05:06.416533 2396 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 05:05:06.417259 kubelet[2396]: E1104 05:05:06.417222 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 05:05:06.417527 kubelet[2396]: I1104 05:05:06.417487 2396 policy_none.go:49] "None policy: Start" Nov 4 05:05:06.417527 kubelet[2396]: I1104 05:05:06.417522 2396 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 05:05:06.417681 kubelet[2396]: I1104 05:05:06.417540 2396 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 05:05:06.420174 kubelet[2396]: I1104 05:05:06.420143 2396 policy_none.go:47] "Start" Nov 4 05:05:06.424511 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
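The burst of "connection refused" failures above is expected at this point in boot: on a kubeadm-style control plane the kubelet starts before the kube-apiserver it is about to launch as a static pod, so every informer list, node registration, lease request, and event POST to https://10.0.0.124:6443 is refused until that container comes up. A minimal Go sketch of the same dial-and-retry pattern (the endpoint and the initial 200ms interval are taken from the log; this is illustrative, not kubelet code):

```go
// waitapi.go - a minimal sketch (not kubelet code) of the retry pattern behind
// the "dial tcp 10.0.0.124:6443: connect: connection refused" errors above:
// keep re-dialing the API server endpoint with a growing backoff until it
// accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.0.0.124:6443" // API server address from the log
	backoff := 200 * time.Millisecond  // first retry interval the lease controller logs
	for {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server is accepting connections")
			return
		}
		fmt.Printf("dial %s: %v; retrying in %v\n", endpoint, err, backoff)
		time.Sleep(backoff)
		if backoff < 2*time.Second { // the logged interval doubles: 200ms, 400ms, 800ms, 1.6s
			backoff *= 2
		}
	}
}
```

The kubelet's own lease controller backs off the same way, which is why the retry interval in this log grows from 200ms through 400ms and 800ms to 1.6s.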
Nov 4 05:05:06.436473 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 05:05:06.440152 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 05:05:06.465152 kubelet[2396]: E1104 05:05:06.465126 2396 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 05:05:06.465435 kubelet[2396]: I1104 05:05:06.465358 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 05:05:06.465435 kubelet[2396]: I1104 05:05:06.465376 2396 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 05:05:06.465635 kubelet[2396]: I1104 05:05:06.465608 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 05:05:06.466524 kubelet[2396]: E1104 05:05:06.466487 2396 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 05:05:06.466589 kubelet[2396]: E1104 05:05:06.466549 2396 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 05:05:06.535924 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 4 05:05:06.554104 kubelet[2396]: E1104 05:05:06.554061 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:06.560077 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 4 05:05:06.561924 kubelet[2396]: E1104 05:05:06.561882 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:06.566878 kubelet[2396]: I1104 05:05:06.566853 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 05:05:06.567403 kubelet[2396]: E1104 05:05:06.567361 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 4 05:05:06.593863 kubelet[2396]: E1104 05:05:06.593772 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Nov 4 05:05:06.598143 systemd[1]: Created slice kubepods-burstable-pod666fc501b9b2afae416bdd51571a1f24.slice - libcontainer container kubepods-burstable-pod666fc501b9b2afae416bdd51571a1f24.slice. 
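The "Created slice" lines show the systemd cgroup driver (CgroupDriver "systemd" in the NodeConfig above) building the pod hierarchy: one slice per QoS class under kubepods.slice, then one slice per pod, with dashes in the pod UID escaped to underscores. A sketch that reconstructs the unit names visible in this log (naming convention only, not the kubelet's implementation):

```go
// slicename.go - reconstructs the systemd slice names seen in the "Created
// slice" lines from a pod's QoS class and UID. Illustrative of the observable
// convention, not kubelet source.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the per-pod slice unit name. systemd uses "-" in a unit
// name to encode nesting, so the pod UID's dashes must be escaped to "_".
func podSlice(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Both outputs match "Created slice" lines in this log.
	fmt.Println(podSlice("burstable", "ce161b3b11c90b0b844f2e4f86b4e8cd"))
	fmt.Println(podSlice("besteffort", "6b21f6c2-3a26-43c1-ad05-ccde43068094"))
}
```

Because the dashes encode nesting, kubepods-burstable-pod&lt;uid&gt;.slice lives inside kubepods-burstable.slice, which lives inside kubepods.slice.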
Nov 4 05:05:06.599984 kubelet[2396]: E1104 05:05:06.599940 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:06.694653 kubelet[2396]: I1104 05:05:06.694472 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:06.694653 kubelet[2396]: I1104 05:05:06.694549 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:06.694653 kubelet[2396]: I1104 05:05:06.694570 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:06.694653 kubelet[2396]: I1104 05:05:06.694603 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:06.694864 kubelet[2396]: I1104 05:05:06.694712 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:06.694864 kubelet[2396]: I1104 05:05:06.694789 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:06.694864 kubelet[2396]: I1104 05:05:06.694811 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:06.694864 kubelet[2396]: I1104 05:05:06.694843 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:06.694864 kubelet[2396]: I1104 05:05:06.694859 2396 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:06.769984 kubelet[2396]: I1104 05:05:06.769848 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 05:05:06.770475 kubelet[2396]: E1104 05:05:06.770424 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 4 05:05:06.917719 kubelet[2396]: E1104 05:05:06.917668 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:06.918860 containerd[1621]: time="2025-11-04T05:05:06.918797522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 4 05:05:06.994995 kubelet[2396]: E1104 05:05:06.994710 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Nov 4 05:05:07.055323 kubelet[2396]: E1104 05:05:07.055276 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:07.055880 containerd[1621]: time="2025-11-04T05:05:07.055831170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 4 05:05:07.058571 kubelet[2396]: E1104 05:05:07.058529 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:07.058900 containerd[1621]: time="2025-11-04T05:05:07.058863005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:666fc501b9b2afae416bdd51571a1f24,Namespace:kube-system,Attempt:0,}" Nov 4 05:05:07.171952 kubelet[2396]: I1104 05:05:07.171866 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 05:05:07.172223 kubelet[2396]: E1104 05:05:07.172191 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 4 05:05:07.222367 kubelet[2396]: E1104 05:05:07.222333 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 05:05:07.347687 kubelet[2396]: E1104 05:05:07.347578 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 05:05:07.444132 kubelet[2396]: E1104 05:05:07.444061 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 05:05:07.484798 kubelet[2396]: E1104 05:05:07.484739 2396 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 05:05:07.643772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000280394.mount: Deactivated successfully. Nov 4 05:05:07.650117 containerd[1621]: time="2025-11-04T05:05:07.650069647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:05:07.651723 containerd[1621]: time="2025-11-04T05:05:07.651686078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:05:07.656792 containerd[1621]: time="2025-11-04T05:05:07.656737620Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:05:07.657819 containerd[1621]: time="2025-11-04T05:05:07.657768152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:05:07.659437 containerd[1621]: time="2025-11-04T05:05:07.659397246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:05:07.660786 containerd[1621]: time="2025-11-04T05:05:07.660737099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:05:07.662403 containerd[1621]: time="2025-11-04T05:05:07.662354511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 05:05:07.662913 containerd[1621]: time="2025-11-04T05:05:07.662864628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 600.186179ms" Nov 4 05:05:07.663435 containerd[1621]: time="2025-11-04T05:05:07.663408067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 05:05:07.666115 containerd[1621]: time="2025-11-04T05:05:07.666072593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.990467ms" Nov 4 05:05:07.668271 containerd[1621]: time="2025-11-04T05:05:07.668228846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.885244ms" Nov 4 05:05:07.688763 containerd[1621]: time="2025-11-04T05:05:07.688688427Z" level=info msg="connecting to shim a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975" address="unix:///run/containerd/s/7030224df051dbe41d1be1814c7c61d3707d459e72c3c622beedc42ab545f904" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:05:07.703520 containerd[1621]: time="2025-11-04T05:05:07.703438885Z" level=info msg="connecting to shim 718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785" address="unix:///run/containerd/s/c9b96bfbb6628f09bb92d9ebe80ede78ca1c35c302c044540d6dc828230887b3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:05:07.710940 containerd[1621]: time="2025-11-04T05:05:07.710863987Z" level=info msg="connecting to shim 5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d" address="unix:///run/containerd/s/13f2f4f2f8f8ccb19766a18b34896a1aea04d7874ee3f497297a535956e019f5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:05:07.725117 systemd[1]: Started cri-containerd-a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975.scope - libcontainer container a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975. Nov 4 05:05:07.752097 systemd[1]: Started cri-containerd-5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d.scope - libcontainer container 5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d. Nov 4 05:05:07.753935 systemd[1]: Started cri-containerd-718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785.scope - libcontainer container 718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785. 
Nov 4 05:05:07.795523 kubelet[2396]: E1104 05:05:07.795458 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Nov 4 05:05:07.803376 containerd[1621]: time="2025-11-04T05:05:07.803330882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975\"" Nov 4 05:05:07.804645 kubelet[2396]: E1104 05:05:07.804610 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:07.807781 containerd[1621]: time="2025-11-04T05:05:07.807752222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:666fc501b9b2afae416bdd51571a1f24,Namespace:kube-system,Attempt:0,} returns sandbox id \"5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d\"" Nov 4 05:05:07.808718 kubelet[2396]: E1104 05:05:07.808608 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:07.813785 containerd[1621]: time="2025-11-04T05:05:07.813737394Z" level=info msg="CreateContainer within sandbox \"a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 05:05:07.814358 containerd[1621]: time="2025-11-04T05:05:07.814317071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785\"" Nov 4 05:05:07.815254 kubelet[2396]: E1104 05:05:07.815229 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:07.815864 containerd[1621]: time="2025-11-04T05:05:07.815832693Z" level=info msg="CreateContainer within sandbox \"5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 05:05:07.820881 containerd[1621]: time="2025-11-04T05:05:07.820837246Z" level=info msg="CreateContainer within sandbox \"718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 05:05:07.827771 containerd[1621]: time="2025-11-04T05:05:07.827723899Z" level=info msg="Container 653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:05:07.830950 containerd[1621]: time="2025-11-04T05:05:07.830908751Z" level=info msg="Container e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:05:07.841435 containerd[1621]: time="2025-11-04T05:05:07.841396665Z" level=info msg="CreateContainer within sandbox \"5373c3d45e152b216cfa71dbb760d886aecf5995bdc6e38d0a318c458cfbe26d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e\"" Nov 4 05:05:07.842107 containerd[1621]: time="2025-11-04T05:05:07.842074797Z" level=info msg="CreateContainer within sandbox \"a8ca5c5fd3388d95c9534d46a1e6192f1105d1cf1cbe0e2446bc2974bb61f975\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b\"" Nov 4 05:05:07.842276 containerd[1621]: time="2025-11-04T05:05:07.842242471Z" level=info msg="StartContainer for \"e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e\"" Nov 4 05:05:07.842812 containerd[1621]: time="2025-11-04T05:05:07.842776232Z" level=info msg="Container a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:05:07.843036 containerd[1621]: time="2025-11-04T05:05:07.842992568Z" level=info msg="StartContainer for \"653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b\"" Nov 4 05:05:07.843660 containerd[1621]: time="2025-11-04T05:05:07.843600247Z" level=info msg="connecting to shim e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e" address="unix:///run/containerd/s/13f2f4f2f8f8ccb19766a18b34896a1aea04d7874ee3f497297a535956e019f5" protocol=ttrpc version=3 Nov 4 05:05:07.843983 containerd[1621]: time="2025-11-04T05:05:07.843927080Z" level=info msg="connecting to shim 653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b" address="unix:///run/containerd/s/7030224df051dbe41d1be1814c7c61d3707d459e72c3c622beedc42ab545f904" protocol=ttrpc version=3 Nov 4 05:05:07.854178 containerd[1621]: time="2025-11-04T05:05:07.854128357Z" level=info msg="CreateContainer within sandbox \"718a0031e7ae9c06e09f56a5d4af78c1006feb0b9dcec69f5f850b9cd9cf8785\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1\"" Nov 4 05:05:07.855358 containerd[1621]: time="2025-11-04T05:05:07.855320933Z" level=info msg="StartContainer for \"a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1\"" Nov 4 05:05:07.856979 containerd[1621]: time="2025-11-04T05:05:07.856918819Z" level=info msg="connecting to shim a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1" address="unix:///run/containerd/s/c9b96bfbb6628f09bb92d9ebe80ede78ca1c35c302c044540d6dc828230887b3" protocol=ttrpc version=3 Nov 4 05:05:07.867659 systemd[1]: Started cri-containerd-653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b.scope - libcontainer container 653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b. Nov 4 05:05:07.871067 systemd[1]: Started cri-containerd-e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e.scope - libcontainer container e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e. Nov 4 05:05:07.889112 systemd[1]: Started cri-containerd-a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1.scope - libcontainer container a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1. 
Nov 4 05:05:07.943480 containerd[1621]: time="2025-11-04T05:05:07.942050281Z" level=info msg="StartContainer for \"e0c844be37edd8403905a5f3b5218893dd53e00c60e458f622e77c727894291e\" returns successfully" Nov 4 05:05:07.943480 containerd[1621]: time="2025-11-04T05:05:07.943070874Z" level=info msg="StartContainer for \"653a156bbdc1f0f5ab8534a4a73bbb676dcff35b6c8aa7897ffcb1a24f102a7b\" returns successfully" Nov 4 05:05:07.957417 containerd[1621]: time="2025-11-04T05:05:07.956003253Z" level=info msg="StartContainer for \"a39ece56b330fb2a9fc20edf45bbec33ca4f9509bedbd7f04c5492284c301bd1\" returns successfully" Nov 4 05:05:07.975940 kubelet[2396]: I1104 05:05:07.975493 2396 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 05:05:07.976070 kubelet[2396]: E1104 05:05:07.975944 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 4 05:05:08.425994 kubelet[2396]: E1104 05:05:08.425615 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:08.425994 kubelet[2396]: E1104 05:05:08.425733 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:08.430539 kubelet[2396]: E1104 05:05:08.430513 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:08.430986 kubelet[2396]: E1104 05:05:08.430950 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:08.431069 kubelet[2396]: E1104 05:05:08.431053 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:08.431255 kubelet[2396]: E1104 05:05:08.431239 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:09.435723 kubelet[2396]: E1104 05:05:09.435514 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:09.435723 kubelet[2396]: E1104 05:05:09.435659 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:09.437322 kubelet[2396]: E1104 05:05:09.437295 2396 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 05:05:09.437513 kubelet[2396]: E1104 05:05:09.437482 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:09.523249 kubelet[2396]: E1104 05:05:09.523184 2396 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 05:05:09.579012 kubelet[2396]: I1104 05:05:09.578435 2396 kubelet_node_status.go:75] 
"Attempting to register node" node="localhost" Nov 4 05:05:09.590744 kubelet[2396]: I1104 05:05:09.590704 2396 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 05:05:09.590744 kubelet[2396]: E1104 05:05:09.590748 2396 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 05:05:09.601586 kubelet[2396]: E1104 05:05:09.601544 2396 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 05:05:09.694628 kubelet[2396]: I1104 05:05:09.694474 2396 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:09.700076 kubelet[2396]: E1104 05:05:09.700044 2396 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:09.700076 kubelet[2396]: I1104 05:05:09.700070 2396 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:09.701490 kubelet[2396]: E1104 05:05:09.701434 2396 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:09.701490 kubelet[2396]: I1104 05:05:09.701474 2396 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:09.703421 kubelet[2396]: E1104 05:05:09.703394 2396 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:09.830002 kubelet[2396]: I1104 05:05:09.829949 2396 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:09.831902 kubelet[2396]: E1104 05:05:09.831874 2396 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:09.832058 kubelet[2396]: E1104 05:05:09.832036 2396 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:10.375987 kubelet[2396]: I1104 05:05:10.375906 2396 apiserver.go:52] "Watching apiserver" Nov 4 05:05:10.392899 kubelet[2396]: I1104 05:05:10.392857 2396 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 05:05:12.362409 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-7.scope)... Nov 4 05:05:12.362433 systemd[1]: Reloading... Nov 4 05:05:12.464001 zram_generator::config[2734]: No configuration found. Nov 4 05:05:12.705375 systemd[1]: Reloading finished in 342 ms. Nov 4 05:05:12.736666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:05:12.761936 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 05:05:12.762329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:05:12.762398 systemd[1]: kubelet.service: Consumed 1.345s CPU time, 126.1M memory peak. 
Nov 4 05:05:12.764770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 05:05:13.064969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 05:05:13.077440 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 05:05:13.131519 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 05:05:13.131519 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 05:05:13.132060 kubelet[2773]: I1104 05:05:13.131563 2773 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 05:05:13.140179 kubelet[2773]: I1104 05:05:13.140115 2773 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 05:05:13.140179 kubelet[2773]: I1104 05:05:13.140149 2773 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 05:05:13.140179 kubelet[2773]: I1104 05:05:13.140191 2773 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 05:05:13.140455 kubelet[2773]: I1104 05:05:13.140199 2773 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 05:05:13.140636 kubelet[2773]: I1104 05:05:13.140605 2773 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 05:05:13.142265 kubelet[2773]: I1104 05:05:13.142244 2773 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 05:05:13.204361 kubelet[2773]: I1104 05:05:13.204301 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 05:05:13.210640 kubelet[2773]: I1104 05:05:13.210592 2773 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 05:05:13.216126 kubelet[2773]: I1104 05:05:13.216070 2773 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 05:05:13.216543 kubelet[2773]: I1104 05:05:13.216497 2773 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 05:05:13.216803 kubelet[2773]: I1104 05:05:13.216536 2773 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 05:05:13.216891 kubelet[2773]: I1104 05:05:13.216815 2773 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 05:05:13.216891 kubelet[2773]: I1104 05:05:13.216830 2773 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 05:05:13.216891 kubelet[2773]: I1104 05:05:13.216885 2773 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 05:05:13.217919 kubelet[2773]: I1104 05:05:13.217885 2773 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:05:13.218193 kubelet[2773]: I1104 05:05:13.218162 2773 kubelet.go:475] "Attempting to sync node with API server" Nov 4 05:05:13.218225 kubelet[2773]: I1104 05:05:13.218194 2773 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 05:05:13.218267 kubelet[2773]: I1104 05:05:13.218234 2773 kubelet.go:387] "Adding apiserver pod source" Nov 4 05:05:13.218434 kubelet[2773]: I1104 05:05:13.218403 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 05:05:13.219901 kubelet[2773]: I1104 05:05:13.219772 2773 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 05:05:13.220469 kubelet[2773]: I1104 05:05:13.220434 2773 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 05:05:13.220510 kubelet[2773]: I1104 05:05:13.220472 2773 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 05:05:13.223225 kubelet[2773]: I1104 
05:05:13.223194 2773 server.go:1262] "Started kubelet" Nov 4 05:05:13.225936 kubelet[2773]: I1104 05:05:13.225908 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 05:05:13.228293 kubelet[2773]: I1104 05:05:13.228236 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 05:05:13.229097 kubelet[2773]: I1104 05:05:13.229066 2773 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 05:05:13.229185 kubelet[2773]: I1104 05:05:13.229168 2773 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 05:05:13.229989 kubelet[2773]: I1104 05:05:13.229315 2773 reconciler.go:29] "Reconciler: start to sync state" Nov 4 05:05:13.229989 kubelet[2773]: I1104 05:05:13.229674 2773 factory.go:223] Registration of the systemd container factory successfully Nov 4 05:05:13.229989 kubelet[2773]: I1104 05:05:13.229758 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 05:05:13.230875 kubelet[2773]: E1104 05:05:13.230830 2773 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 05:05:13.231241 kubelet[2773]: I1104 05:05:13.231178 2773 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 05:05:13.231442 kubelet[2773]: I1104 05:05:13.231360 2773 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 05:05:13.231442 kubelet[2773]: I1104 05:05:13.231424 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 05:05:13.232003 kubelet[2773]: I1104 05:05:13.231972 2773 server.go:310] "Adding debug handlers to kubelet server" Nov 4 05:05:13.238319 kubelet[2773]: I1104 05:05:13.238263 2773 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 05:05:13.243332 kubelet[2773]: E1104 05:05:13.243199 2773 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 05:05:13.243636 kubelet[2773]: I1104 05:05:13.243616 2773 factory.go:223] Registration of the containerd container factory successfully Nov 4 05:05:13.260913 kubelet[2773]: I1104 05:05:13.260846 2773 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 05:05:13.265812 kubelet[2773]: I1104 05:05:13.265340 2773 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 05:05:13.265812 kubelet[2773]: I1104 05:05:13.265372 2773 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 05:05:13.265812 kubelet[2773]: I1104 05:05:13.265401 2773 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 05:05:13.265812 kubelet[2773]: E1104 05:05:13.265456 2773 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 05:05:13.293189 kubelet[2773]: I1104 05:05:13.293137 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 05:05:13.293189 kubelet[2773]: I1104 05:05:13.293159 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 05:05:13.293189 kubelet[2773]: I1104 05:05:13.293190 2773 state_mem.go:36] "Initialized new in-memory state store" Nov 4 05:05:13.293383 kubelet[2773]: I1104 05:05:13.293372 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 05:05:13.293408 kubelet[2773]: I1104 05:05:13.293384 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 05:05:13.293408 kubelet[2773]: I1104 05:05:13.293405 2773 policy_none.go:49] "None policy: Start" Nov 4 05:05:13.293449 kubelet[2773]: I1104 05:05:13.293423 2773 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 05:05:13.293449 kubelet[2773]: I1104 05:05:13.293438 2773 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 05:05:13.293562 kubelet[2773]: I1104 05:05:13.293543 2773 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 4 05:05:13.293591 kubelet[2773]: I1104 05:05:13.293569 2773 policy_none.go:47] "Start" Nov 4 05:05:13.298421 kubelet[2773]: E1104 05:05:13.298399 2773 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 05:05:13.298605 kubelet[2773]: I1104 05:05:13.298589 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 05:05:13.298630 kubelet[2773]: I1104 05:05:13.298605 2773 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 05:05:13.299011 kubelet[2773]: I1104 05:05:13.298898 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 05:05:13.300883 kubelet[2773]: E1104 05:05:13.300517 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 05:05:13.368266 kubelet[2773]: I1104 05:05:13.367277 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.368266 kubelet[2773]: I1104 05:05:13.367923 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:13.397093 kubelet[2773]: I1104 05:05:13.397053 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:13.403893 kubelet[2773]: I1104 05:05:13.403831 2773 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 05:05:13.431221 kubelet[2773]: I1104 05:05:13.431168 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:13.431221 kubelet[2773]: I1104 05:05:13.431213 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.431221 kubelet[2773]: I1104 05:05:13.431233 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.431410 kubelet[2773]: I1104 05:05:13.431250 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:13.431410 kubelet[2773]: I1104 05:05:13.431265 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:13.431410 kubelet[2773]: I1104 05:05:13.431341 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/666fc501b9b2afae416bdd51571a1f24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"666fc501b9b2afae416bdd51571a1f24\") " pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:13.431410 kubelet[2773]: I1104 05:05:13.431408 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.431503 kubelet[2773]: I1104 05:05:13.431429 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.431503 kubelet[2773]: I1104 05:05:13.431450 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:13.533923 kubelet[2773]: E1104 05:05:13.533837 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:13.535058 kubelet[2773]: E1104 05:05:13.535022 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:13.536493 kubelet[2773]: I1104 05:05:13.536450 2773 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 05:05:13.536686 kubelet[2773]: I1104 05:05:13.536538 2773 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 05:05:13.536686 kubelet[2773]: E1104 05:05:13.536554 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:14.220215 kubelet[2773]: I1104 05:05:14.220164 2773 apiserver.go:52] "Watching apiserver" Nov 4 05:05:14.229570 kubelet[2773]: I1104 05:05:14.229488 2773 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 05:05:14.283004 kubelet[2773]: I1104 05:05:14.282869 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:14.283004 kubelet[2773]: I1104 05:05:14.282908 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:14.283243 kubelet[2773]: I1104 05:05:14.282981 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:14.347753 kubelet[2773]: E1104 05:05:14.347651 2773 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 4 05:05:14.347937 kubelet[2773]: E1104 05:05:14.347895 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:14.348501 kubelet[2773]: E1104 05:05:14.348468 2773 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 4 05:05:14.348655 kubelet[2773]: E1104 05:05:14.348584 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:14.349414 kubelet[2773]: E1104 05:05:14.349382 2773 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already 
exists" pod="kube-system/kube-apiserver-localhost" Nov 4 05:05:14.349533 kubelet[2773]: E1104 05:05:14.349512 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:14.511329 kubelet[2773]: I1104 05:05:14.511145 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5111140650000001 podStartE2EDuration="1.511114065s" podCreationTimestamp="2025-11-04 05:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:05:14.49902972 +0000 UTC m=+1.415773019" watchObservedRunningTime="2025-11-04 05:05:14.511114065 +0000 UTC m=+1.427857364" Nov 4 05:05:14.525154 kubelet[2773]: I1104 05:05:14.525054 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.525030382 podStartE2EDuration="1.525030382s" podCreationTimestamp="2025-11-04 05:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:05:14.52400797 +0000 UTC m=+1.440751279" watchObservedRunningTime="2025-11-04 05:05:14.525030382 +0000 UTC m=+1.441773691" Nov 4 05:05:14.525357 kubelet[2773]: I1104 05:05:14.525165 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.525154599 podStartE2EDuration="1.525154599s" podCreationTimestamp="2025-11-04 05:05:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:05:14.511292795 +0000 UTC m=+1.428036104" watchObservedRunningTime="2025-11-04 05:05:14.525154599 +0000 UTC m=+1.441897898" Nov 4 05:05:15.284144 kubelet[2773]: E1104 05:05:15.284086 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:15.284668 kubelet[2773]: E1104 05:05:15.284318 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:15.284668 kubelet[2773]: E1104 05:05:15.284407 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:18.590625 kubelet[2773]: I1104 05:05:18.590563 2773 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 05:05:18.591277 containerd[1621]: time="2025-11-04T05:05:18.591112344Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 05:05:18.591600 kubelet[2773]: I1104 05:05:18.591380 2773 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 05:05:19.044871 kubelet[2773]: E1104 05:05:19.044726 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:19.291305 kubelet[2773]: E1104 05:05:19.291251 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:19.706637 systemd[1]: Created slice kubepods-besteffort-pod6b21f6c2_3a26_43c1_ad05_ccde43068094.slice - libcontainer container kubepods-besteffort-pod6b21f6c2_3a26_43c1_ad05_ccde43068094.slice.
Nov 4 05:05:19.760579 systemd[1]: Created slice kubepods-besteffort-pod94a830ac_02f1_4616_bbbb_cec254ab1f56.slice - libcontainer container kubepods-besteffort-pod94a830ac_02f1_4616_bbbb_cec254ab1f56.slice.
Nov 4 05:05:19.767849 kubelet[2773]: I1104 05:05:19.767778 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b21f6c2-3a26-43c1-ad05-ccde43068094-xtables-lock\") pod \"kube-proxy-jhl7x\" (UID: \"6b21f6c2-3a26-43c1-ad05-ccde43068094\") " pod="kube-system/kube-proxy-jhl7x"
Nov 4 05:05:19.767849 kubelet[2773]: I1104 05:05:19.767835 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b21f6c2-3a26-43c1-ad05-ccde43068094-lib-modules\") pod \"kube-proxy-jhl7x\" (UID: \"6b21f6c2-3a26-43c1-ad05-ccde43068094\") " pod="kube-system/kube-proxy-jhl7x"
Nov 4 05:05:19.768341 kubelet[2773]: I1104 05:05:19.767855 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b21f6c2-3a26-43c1-ad05-ccde43068094-kube-proxy\") pod \"kube-proxy-jhl7x\" (UID: \"6b21f6c2-3a26-43c1-ad05-ccde43068094\") " pod="kube-system/kube-proxy-jhl7x"
Nov 4 05:05:19.768341 kubelet[2773]: I1104 05:05:19.767895 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/94a830ac-02f1-4616-bbbb-cec254ab1f56-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-mjqk7\" (UID: \"94a830ac-02f1-4616-bbbb-cec254ab1f56\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mjqk7"
Nov 4 05:05:19.768341 kubelet[2773]: I1104 05:05:19.767915 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfstv\" (UniqueName: \"kubernetes.io/projected/94a830ac-02f1-4616-bbbb-cec254ab1f56-kube-api-access-qfstv\") pod \"tigera-operator-65cdcdfd6d-mjqk7\" (UID: \"94a830ac-02f1-4616-bbbb-cec254ab1f56\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mjqk7"
Nov 4 05:05:19.768341 kubelet[2773]: I1104 05:05:19.767931 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtw4\" (UniqueName: \"kubernetes.io/projected/6b21f6c2-3a26-43c1-ad05-ccde43068094-kube-api-access-4dtw4\") pod \"kube-proxy-jhl7x\" (UID: \"6b21f6c2-3a26-43c1-ad05-ccde43068094\") " pod="kube-system/kube-proxy-jhl7x"
Nov 4 05:05:20.016769 kubelet[2773]: E1104 05:05:20.016647 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:20.017509 containerd[1621]: time="2025-11-04T05:05:20.017460366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhl7x,Uid:6b21f6c2-3a26-43c1-ad05-ccde43068094,Namespace:kube-system,Attempt:0,}"
Nov 4 05:05:20.040769 containerd[1621]: time="2025-11-04T05:05:20.040719647Z" level=info msg="connecting to shim 2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0" address="unix:///run/containerd/s/f1065725ffa6f063a7b21bf29e1fbf2ddcb2d1666ab88a29260faa72fb1d88d6" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:05:20.068224 containerd[1621]: time="2025-11-04T05:05:20.068183461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mjqk7,Uid:94a830ac-02f1-4616-bbbb-cec254ab1f56,Namespace:tigera-operator,Attempt:0,}"
Nov 4 05:05:20.070131 systemd[1]: Started cri-containerd-2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0.scope - libcontainer container 2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0.
Nov 4 05:05:20.091107 containerd[1621]: time="2025-11-04T05:05:20.091036905Z" level=info msg="connecting to shim e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018" address="unix:///run/containerd/s/7a894f7e3cc1b64b5a25b3ecbf03b21a4e944c78e70063a0ad0ce8340a355330" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:05:20.109551 containerd[1621]: time="2025-11-04T05:05:20.109494723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jhl7x,Uid:6b21f6c2-3a26-43c1-ad05-ccde43068094,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0\""
Nov 4 05:05:20.114356 kubelet[2773]: E1104 05:05:20.114004 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:20.117371 systemd[1]: Started cri-containerd-e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018.scope - libcontainer container e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018.
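
The repeated dns.go:154 warnings above come from kubelet noticing that the node's resolv.conf lists more nameservers than the glibc resolver will honor: glibc caps the list at three (MAXNS), so kubelet truncates what it propagates into pod sandboxes and logs the nameserver line it actually applied, here 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation, as an illustration only (the constant and helper names are assumptions, not kubelet's actual code):

    package main

    import (
        "fmt"
        "log"
    )

    // maxNameservers mirrors the glibc MAXNS limit of 3 resolvers.
    const maxNameservers = 3

    // truncateNameservers keeps the first three entries and reports
    // whether any were dropped, roughly what the kubelet warning reflects.
    func truncateNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        applied, dropped := truncateNameservers(ns)
        if dropped {
            log.Printf("Nameserver limits exceeded; applied: %v", applied)
        }
        fmt.Println(applied)
    }
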
Nov 4 05:05:20.125981 containerd[1621]: time="2025-11-04T05:05:20.125922630Z" level=info msg="CreateContainer within sandbox \"2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 05:05:20.140603 containerd[1621]: time="2025-11-04T05:05:20.140562506Z" level=info msg="Container ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:05:20.150953 containerd[1621]: time="2025-11-04T05:05:20.150910501Z" level=info msg="CreateContainer within sandbox \"2b5e8d13407f1d70f944e7b859e7fbe7e1a2c21d0bcc469ca391db5944fcecf0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07\""
Nov 4 05:05:20.152511 containerd[1621]: time="2025-11-04T05:05:20.152469029Z" level=info msg="StartContainer for \"ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07\""
Nov 4 05:05:20.154425 containerd[1621]: time="2025-11-04T05:05:20.154397668Z" level=info msg="connecting to shim ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07" address="unix:///run/containerd/s/f1065725ffa6f063a7b21bf29e1fbf2ddcb2d1666ab88a29260faa72fb1d88d6" protocol=ttrpc version=3
Nov 4 05:05:20.167778 containerd[1621]: time="2025-11-04T05:05:20.167730110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mjqk7,Uid:94a830ac-02f1-4616-bbbb-cec254ab1f56,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018\""
Nov 4 05:05:20.171545 containerd[1621]: time="2025-11-04T05:05:20.171441090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 05:05:20.182216 systemd[1]: Started cri-containerd-ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07.scope - libcontainer container ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07.
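
The containerd lines above trace the CRI call sequence kubelet drives for a pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, StartContainer launches the returned container id, and each task talks to its sandbox's shim over a ttrpc socket under /run/containerd/s/. A hedged sketch of the same sequence driven directly against the CRI socket (the endpoint path is containerd's default; the image reference is illustrative, not taken from this log):

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumes the default containerd CRI endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata copied from the kube-proxy pod in the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-jhl7x",
                Namespace: "kube-system",
                Uid:       "6b21f6c2-3a26-43c1-ad05-ccde43068094",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                // Hypothetical image ref; the log does not record kube-proxy's image.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            log.Fatal(err)
        }
        log.Printf("sandbox %s, container %s started", sb.PodSandboxId, c.ContainerId)
    }
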
Nov 4 05:05:20.228977 containerd[1621]: time="2025-11-04T05:05:20.228906391Z" level=info msg="StartContainer for \"ebaa2532c21334ed44f822abc796c6d910631a03c5a0d900d5fcb1b779030b07\" returns successfully"
Nov 4 05:05:20.297456 kubelet[2773]: E1104 05:05:20.297380 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:20.305988 kubelet[2773]: I1104 05:05:20.305850 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jhl7x" podStartSLOduration=1.305830141 podStartE2EDuration="1.305830141s" podCreationTimestamp="2025-11-04 05:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:05:20.305654539 +0000 UTC m=+7.222397848" watchObservedRunningTime="2025-11-04 05:05:20.305830141 +0000 UTC m=+7.222573440"
Nov 4 05:05:20.537045 kubelet[2773]: E1104 05:05:20.536948 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:20.871566 kubelet[2773]: E1104 05:05:20.871541 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:21.299475 kubelet[2773]: E1104 05:05:21.299413 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:21.299933 kubelet[2773]: E1104 05:05:21.299912 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:22.209161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980526464.mount: Deactivated successfully.
Nov 4 05:05:22.268470 update_engine[1596]: I20251104 05:05:22.268361 1596 update_attempter.cc:509] Updating boot flags...
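
The pod_startup_latency_tracker line above is simple arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and for kube-proxy the SLO and E2E durations are the same 1.3 s because no image pull happened (firstStartedPulling and lastFinishedPulling are the zero time). A worked check of that subtraction, with the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-11-04 05:05:19 +0000 UTC")
        running, _ := time.Parse(layout, "2025-11-04 05:05:20.305830141 +0000 UTC")
        // Matches podStartE2EDuration="1.305830141s" in the log.
        fmt.Println(running.Sub(created))
    }
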
Nov 4 05:05:22.302454 kubelet[2773]: E1104 05:05:22.302408 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:23.308993 containerd[1621]: time="2025-11-04T05:05:23.308890631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:23.310142 containerd[1621]: time="2025-11-04T05:05:23.310043308Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23560304"
Nov 4 05:05:23.311635 containerd[1621]: time="2025-11-04T05:05:23.311586853Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:23.314249 containerd[1621]: time="2025-11-04T05:05:23.314190862Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:23.314661 containerd[1621]: time="2025-11-04T05:05:23.314632656Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.143110093s"
Nov 4 05:05:23.314712 containerd[1621]: time="2025-11-04T05:05:23.314664988Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 05:05:23.321776 containerd[1621]: time="2025-11-04T05:05:23.321737455Z" level=info msg="CreateContainer within sandbox \"e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 05:05:23.333075 containerd[1621]: time="2025-11-04T05:05:23.332999688Z" level=info msg="Container 5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:05:23.342322 containerd[1621]: time="2025-11-04T05:05:23.342252004Z" level=info msg="CreateContainer within sandbox \"e55873f7a291c3b2e4b576c82d82ba3fe8a6c69bc946e43485c20f7ff1377018\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3\""
Nov 4 05:05:23.343042 containerd[1621]: time="2025-11-04T05:05:23.342992061Z" level=info msg="StartContainer for \"5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3\""
Nov 4 05:05:23.344105 containerd[1621]: time="2025-11-04T05:05:23.344060740Z" level=info msg="connecting to shim 5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3" address="unix:///run/containerd/s/7a894f7e3cc1b64b5a25b3ecbf03b21a4e944c78e70063a0ad0ce8340a355330" protocol=ttrpc version=3
Nov 4 05:05:23.395133 systemd[1]: Started cri-containerd-5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3.scope - libcontainer container 5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3.
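
The Pulled image line above records the tag-to-digest resolution: the mutable tag quay.io/tigera/operator:v1.38.7 resolved to the immutable repo digest sha256:1b62…, with 23,560,304 bytes read for a 25,057,686-byte image in about 3.14 s. A hedged sketch of an equivalent pull using containerd's Go client (the socket path and the k8s.io namespace are the defaults a CRI-managed node would use; treat the flow as illustrative):

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        // Target().Digest is the repo digest the log prints.
        log.Printf("pulled %s -> %s", img.Name(), img.Target().Digest)
    }
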
Nov 4 05:05:23.439912 containerd[1621]: time="2025-11-04T05:05:23.439809070Z" level=info msg="StartContainer for \"5e6133f85865f8b4e34d9d266a870ca121538caf144c3e5a2bcd0a0e5daf25f3\" returns successfully"
Nov 4 05:05:24.316028 kubelet[2773]: I1104 05:05:24.315911 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-mjqk7" podStartSLOduration=2.16953057 podStartE2EDuration="5.315269684s" podCreationTimestamp="2025-11-04 05:05:19 +0000 UTC" firstStartedPulling="2025-11-04 05:05:20.16957488 +0000 UTC m=+7.086318179" lastFinishedPulling="2025-11-04 05:05:23.315314004 +0000 UTC m=+10.232057293" observedRunningTime="2025-11-04 05:05:24.315195323 +0000 UTC m=+11.231938632" watchObservedRunningTime="2025-11-04 05:05:24.315269684 +0000 UTC m=+11.232012983"
Nov 4 05:05:29.363551 sudo[1823]: pam_unix(sudo:session): session closed for user root
Nov 4 05:05:29.365948 sshd[1822]: Connection closed by 10.0.0.1 port 41472
Nov 4 05:05:29.366684 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Nov 4 05:05:29.374716 systemd-logind[1594]: Session 7 logged out. Waiting for processes to exit.
Nov 4 05:05:29.378300 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:41472.service: Deactivated successfully.
Nov 4 05:05:29.384888 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 05:05:29.385774 systemd[1]: session-7.scope: Consumed 5.839s CPU time, 224.6M memory peak.
Nov 4 05:05:29.390940 systemd-logind[1594]: Removed session 7.
Nov 4 05:05:33.570093 systemd[1]: Created slice kubepods-besteffort-podee62c7ab_cf0c_4819_a97c_52adb4c94f78.slice - libcontainer container kubepods-besteffort-podee62c7ab_cf0c_4819_a97c_52adb4c94f78.slice.
Nov 4 05:05:33.654230 kubelet[2773]: I1104 05:05:33.654167 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee62c7ab-cf0c-4819-a97c-52adb4c94f78-tigera-ca-bundle\") pod \"calico-typha-545d4c5cb4-7vbps\" (UID: \"ee62c7ab-cf0c-4819-a97c-52adb4c94f78\") " pod="calico-system/calico-typha-545d4c5cb4-7vbps"
Nov 4 05:05:33.654770 kubelet[2773]: I1104 05:05:33.654237 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drwpm\" (UniqueName: \"kubernetes.io/projected/ee62c7ab-cf0c-4819-a97c-52adb4c94f78-kube-api-access-drwpm\") pod \"calico-typha-545d4c5cb4-7vbps\" (UID: \"ee62c7ab-cf0c-4819-a97c-52adb4c94f78\") " pod="calico-system/calico-typha-545d4c5cb4-7vbps"
Nov 4 05:05:33.654770 kubelet[2773]: I1104 05:05:33.654274 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ee62c7ab-cf0c-4819-a97c-52adb4c94f78-typha-certs\") pod \"calico-typha-545d4c5cb4-7vbps\" (UID: \"ee62c7ab-cf0c-4819-a97c-52adb4c94f78\") " pod="calico-system/calico-typha-545d4c5cb4-7vbps"
Nov 4 05:05:33.768000 systemd[1]: Created slice kubepods-besteffort-poda117bc5c_2817_4dd2_ae48_feae21a3a851.slice - libcontainer container kubepods-besteffort-poda117bc5c_2817_4dd2_ae48_feae21a3a851.slice.
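
The Created slice lines show the systemd cgroup layout kubelet uses: each pod gets a slice named after its QoS class and UID, with the dashes in the UID rewritten to underscores so systemd does not parse them as slice-hierarchy separators (ee62c7ab-cf0c-… becomes kubepods-besteffort-podee62c7ab_cf0c_….slice). A small sketch of that name derivation, assuming the besteffort class (the escaping rule here is the one observable in the log, not kubelet's exact implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds a kubelet-style systemd slice name: dashes in the
    // pod UID would otherwise denote slice hierarchy, so they become underscores.
    func podSliceName(qosClass, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "ee62c7ab-cf0c-4819-a97c-52adb4c94f78"))
        // kubepods-besteffort-podee62c7ab_cf0c_4819_a97c_52adb4c94f78.slice
    }
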
Nov 4 05:05:33.826086 kubelet[2773]: E1104 05:05:33.825099 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:33.855838 kubelet[2773]: I1104 05:05:33.855752 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-cni-log-dir\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.855838 kubelet[2773]: I1104 05:05:33.855803 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-var-lib-calico\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.855838 kubelet[2773]: I1104 05:05:33.855826 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a117bc5c-2817-4dd2-ae48-feae21a3a851-node-certs\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.855838 kubelet[2773]: I1104 05:05:33.855843 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/91e262bf-e00e-40d5-b480-4f480c906f2e-kubelet-dir\") pod \"csi-node-driver-m9ml2\" (UID: \"91e262bf-e00e-40d5-b480-4f480c906f2e\") " pod="calico-system/csi-node-driver-m9ml2"
Nov 4 05:05:33.856093 kubelet[2773]: I1104 05:05:33.855859 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-flexvol-driver-host\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856093 kubelet[2773]: I1104 05:05:33.855876 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/91e262bf-e00e-40d5-b480-4f480c906f2e-registration-dir\") pod \"csi-node-driver-m9ml2\" (UID: \"91e262bf-e00e-40d5-b480-4f480c906f2e\") " pod="calico-system/csi-node-driver-m9ml2"
Nov 4 05:05:33.856093 kubelet[2773]: I1104 05:05:33.855888 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/91e262bf-e00e-40d5-b480-4f480c906f2e-varrun\") pod \"csi-node-driver-m9ml2\" (UID: \"91e262bf-e00e-40d5-b480-4f480c906f2e\") " pod="calico-system/csi-node-driver-m9ml2"
Nov 4 05:05:33.856093 kubelet[2773]: I1104 05:05:33.855905 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lppk5\" (UniqueName: \"kubernetes.io/projected/91e262bf-e00e-40d5-b480-4f480c906f2e-kube-api-access-lppk5\") pod \"csi-node-driver-m9ml2\" (UID: \"91e262bf-e00e-40d5-b480-4f480c906f2e\") " pod="calico-system/csi-node-driver-m9ml2"
Nov 4 05:05:33.856093 kubelet[2773]: I1104 05:05:33.855921 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-cni-bin-dir\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856211 kubelet[2773]: I1104 05:05:33.855936 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-policysync\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856517 kubelet[2773]: I1104 05:05:33.856482 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a117bc5c-2817-4dd2-ae48-feae21a3a851-tigera-ca-bundle\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856517 kubelet[2773]: I1104 05:05:33.856510 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/91e262bf-e00e-40d5-b480-4f480c906f2e-socket-dir\") pod \"csi-node-driver-m9ml2\" (UID: \"91e262bf-e00e-40d5-b480-4f480c906f2e\") " pod="calico-system/csi-node-driver-m9ml2"
Nov 4 05:05:33.856609 kubelet[2773]: I1104 05:05:33.856588 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-lib-modules\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856643 kubelet[2773]: I1104 05:05:33.856606 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-var-run-calico\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856643 kubelet[2773]: I1104 05:05:33.856624 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb2zc\" (UniqueName: \"kubernetes.io/projected/a117bc5c-2817-4dd2-ae48-feae21a3a851-kube-api-access-gb2zc\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856719 kubelet[2773]: I1104 05:05:33.856704 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-cni-net-dir\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.856745 kubelet[2773]: I1104 05:05:33.856721 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a117bc5c-2817-4dd2-ae48-feae21a3a851-xtables-lock\") pod \"calico-node-w5r5r\" (UID: \"a117bc5c-2817-4dd2-ae48-feae21a3a851\") " pod="calico-system/calico-node-w5r5r"
Nov 4 05:05:33.878759 kubelet[2773]: E1104 05:05:33.878725 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:33.879357 containerd[1621]: time="2025-11-04T05:05:33.879308455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545d4c5cb4-7vbps,Uid:ee62c7ab-cf0c-4819-a97c-52adb4c94f78,Namespace:calico-system,Attempt:0,}"
Nov 4 05:05:33.928880 containerd[1621]: time="2025-11-04T05:05:33.928819316Z" level=info msg="connecting to shim cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5" address="unix:///run/containerd/s/1d377364838d3cc5cf195eb8e70b9422a461da524880b07d6331a141aee30b55" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:05:33.961275 systemd[1]: Started cri-containerd-cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5.scope - libcontainer container cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5.
Nov 4 05:05:33.965313 kubelet[2773]: E1104 05:05:33.965274 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 05:05:33.965864 kubelet[2773]: W1104 05:05:33.965798 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 05:05:33.965978 kubelet[2773]: E1104 05:05:33.965948 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 05:05:33.987332 kubelet[2773]: E1104 05:05:33.987274 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 05:05:33.987332 kubelet[2773]: W1104 05:05:33.987310 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 05:05:33.987332 kubelet[2773]: E1104 05:05:33.987337 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 05:05:33.988802 kubelet[2773]: E1104 05:05:33.988718 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 05:05:33.988802 kubelet[2773]: W1104 05:05:33.988763 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 05:05:33.988802 kubelet[2773]: E1104 05:05:33.988790 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 05:05:34.118072 containerd[1621]: time="2025-11-04T05:05:34.117916620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545d4c5cb4-7vbps,Uid:ee62c7ab-cf0c-4819-a97c-52adb4c94f78,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5\""
Nov 4 05:05:34.119298 kubelet[2773]: E1104 05:05:34.119236 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:34.119833 containerd[1621]: time="2025-11-04T05:05:34.119796338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w5r5r,Uid:a117bc5c-2817-4dd2-ae48-feae21a3a851,Namespace:calico-system,Attempt:0,}"
Nov 4 05:05:34.125718 kubelet[2773]: E1104 05:05:34.125663 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:34.131173 containerd[1621]: time="2025-11-04T05:05:34.131114436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 4 05:05:34.165093 containerd[1621]: time="2025-11-04T05:05:34.165020439Z" level=info msg="connecting to shim fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7" address="unix:///run/containerd/s/e45f9edd152473f4629417f87b7cbd305e84c2423bd7cddce4a79825150061d9" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:05:34.192160 systemd[1]: Started cri-containerd-fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7.scope - libcontainer container fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7.
Nov 4 05:05:34.219406 containerd[1621]: time="2025-11-04T05:05:34.219353593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w5r5r,Uid:a117bc5c-2817-4dd2-ae48-feae21a3a851,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\""
Nov 4 05:05:34.220064 kubelet[2773]: E1104 05:05:34.220040 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:35.266148 kubelet[2773]: E1104 05:05:35.266079 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:36.643625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632889655.mount: Deactivated successfully.
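
The driver-call errors above follow from the FlexVolume contract: kubelet execs each driver binary it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with an init argument and expects a JSON status object on stdout. Calico's nodeagent~uds directory exists, but its uds binary is absent, so the exec fails, the captured output is empty, and the JSON unmarshal reports "unexpected end of JSON input". A hedged sketch of the probe side of that contract (the struct fields are the common FlexVolume status keys; this is illustrative, not kubelet's code):

    package main

    import (
        "encoding/json"
        "log"
        "os/exec"
    )

    // driverStatus mirrors the JSON object a FlexVolume driver must print.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probe(driverPath string) (*driverStatus, error) {
        out, err := exec.Command(driverPath, "init").CombinedOutput()
        if err != nil {
            // Fails outright when the binary is missing, as in the log.
            return nil, err
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Empty output yields "unexpected end of JSON input".
            return nil, err
        }
        return &st, nil
    }

    func main() {
        _, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        if err != nil {
            log.Printf("FlexVolume: driver call failed: %v", err)
        }
    }
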
Nov 4 05:05:37.266539 kubelet[2773]: E1104 05:05:37.266470 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:39.266412 kubelet[2773]: E1104 05:05:39.266339 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:39.890351 containerd[1621]: time="2025-11-04T05:05:39.890294612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:39.891289 containerd[1621]: time="2025-11-04T05:05:39.891225442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893"
Nov 4 05:05:39.892453 containerd[1621]: time="2025-11-04T05:05:39.892412203Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:39.894393 containerd[1621]: time="2025-11-04T05:05:39.894349264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:39.895001 containerd[1621]: time="2025-11-04T05:05:39.894947929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 5.763779713s"
Nov 4 05:05:39.895047 containerd[1621]: time="2025-11-04T05:05:39.895008364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 4 05:05:39.896084 containerd[1621]: time="2025-11-04T05:05:39.896052637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 4 05:05:39.911370 containerd[1621]: time="2025-11-04T05:05:39.911013070Z" level=info msg="CreateContainer within sandbox \"cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 4 05:05:39.922461 containerd[1621]: time="2025-11-04T05:05:39.922412310Z" level=info msg="Container 905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:05:40.031699 containerd[1621]: time="2025-11-04T05:05:40.031594805Z" level=info msg="CreateContainer within sandbox \"cfbeaadb49123d52c32d9ca0fe1a20a214cc79c2cb56e5150db0bbbe55f609f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e\""
Nov 4 05:05:40.032403 containerd[1621]: time="2025-11-04T05:05:40.032356286Z" level=info msg="StartContainer for \"905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e\""
Nov 4 05:05:40.033738 containerd[1621]: time="2025-11-04T05:05:40.033701775Z" level=info msg="connecting to shim 905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e" address="unix:///run/containerd/s/1d377364838d3cc5cf195eb8e70b9422a461da524880b07d6331a141aee30b55" protocol=ttrpc version=3
Nov 4 05:05:40.061128 systemd[1]: Started cri-containerd-905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e.scope - libcontainer container 905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e.
Nov 4 05:05:40.255251 containerd[1621]: time="2025-11-04T05:05:40.255129242Z" level=info msg="StartContainer for \"905b4d23f0cefd8fa38065b19cbf4ca50347f25ee901778632c8fb0707e60b7e\" returns successfully"
Nov 4 05:05:40.551413 kubelet[2773]: E1104 05:05:40.551372 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:40.590264 kubelet[2773]: E1104 05:05:40.590225 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 05:05:40.590264 kubelet[2773]: W1104 05:05:40.590249 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 05:05:40.590264 kubelet[2773]: E1104 05:05:40.590273 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three FlexVolume probe messages above repeat as kubelet rescans the plugin directory, with the final plugins.go:697 entry at 05:05:40.609828]
Nov 4 05:05:41.185904 containerd[1621]: time="2025-11-04T05:05:41.185811828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:41.187052 containerd[1621]: time="2025-11-04T05:05:41.187014478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0"
Nov 4 05:05:41.188483 containerd[1621]: time="2025-11-04T05:05:41.188413246Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:41.190868 containerd[1621]: time="2025-11-04T05:05:41.190813878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:41.191355 containerd[1621]: time="2025-11-04T05:05:41.191311393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.295221346s"
Nov 4 05:05:41.191355 containerd[1621]: time="2025-11-04T05:05:41.191349385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 4 05:05:41.196055 containerd[1621]: time="2025-11-04T05:05:41.196000616Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 4 05:05:41.206534 containerd[1621]: time="2025-11-04T05:05:41.206474740Z" level=info msg="Container 434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:05:41.217413 containerd[1621]: time="2025-11-04T05:05:41.217355207Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436\""
Nov 4 05:05:41.218177 containerd[1621]: time="2025-11-04T05:05:41.218128321Z" level=info msg="StartContainer for \"434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436\""
Nov 4 05:05:41.219919 containerd[1621]: time="2025-11-04T05:05:41.219856949Z" level=info msg="connecting to shim 434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436" address="unix:///run/containerd/s/e45f9edd152473f4629417f87b7cbd305e84c2423bd7cddce4a79825150061d9" protocol=ttrpc version=3
Nov 4 05:05:41.257239 systemd[1]: Started cri-containerd-434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436.scope - libcontainer container 434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436.
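
Note that the pod2daemon-flexvol pull above finished with "active requests=0, bytes read=0": the tag still had to be resolved and the ImageCreate events recorded, but every layer was already in the content store, so nothing was transferred from the registry. A hedged sketch that checks the local store before pulling (GetImage and Pull are real containerd client calls; the check-then-pull flow itself is illustrative):

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ref := "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4"
        if img, err := client.GetImage(ctx, ref); err == nil {
            // Already present: a pull would read zero bytes from the registry.
            log.Printf("cached: %s (%s)", img.Name(), img.Target().Digest)
            return
        }
        if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
            log.Fatal(err)
        }
    }
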
Nov 4 05:05:41.268450 kubelet[2773]: E1104 05:05:41.268386 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:41.309252 containerd[1621]: time="2025-11-04T05:05:41.309206776Z" level=info msg="StartContainer for \"434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436\" returns successfully"
Nov 4 05:05:41.320460 systemd[1]: cri-containerd-434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436.scope: Deactivated successfully.
Nov 4 05:05:41.320894 systemd[1]: cri-containerd-434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436.scope: Consumed 45ms CPU time, 6.5M memory peak, 4.6M written to disk.
Nov 4 05:05:41.322330 containerd[1621]: time="2025-11-04T05:05:41.322278663Z" level=info msg="received exit event container_id:\"434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436\" id:\"434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436\" pid:3423 exited_at:{seconds:1762232741 nanos:321762222}"
Nov 4 05:05:41.349694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-434920cd9a2257c9a192af2c57242405531ce49fa3608a8bcf008f2908f47436-rootfs.mount: Deactivated successfully.
Nov 4 05:05:41.554700 kubelet[2773]: I1104 05:05:41.554644 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 05:05:41.555317 kubelet[2773]: E1104 05:05:41.555109 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:41.555317 kubelet[2773]: E1104 05:05:41.555109 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:41.666947 kubelet[2773]: I1104 05:05:41.666770 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545d4c5cb4-7vbps" podStartSLOduration=2.901494946 podStartE2EDuration="8.66675311s" podCreationTimestamp="2025-11-04 05:05:33 +0000 UTC" firstStartedPulling="2025-11-04 05:05:34.130591301 +0000 UTC m=+21.047334610" lastFinishedPulling="2025-11-04 05:05:39.895849475 +0000 UTC m=+26.812592774" observedRunningTime="2025-11-04 05:05:40.585858834 +0000 UTC m=+27.502602133" watchObservedRunningTime="2025-11-04 05:05:41.66675311 +0000 UTC m=+28.583496409"
Nov 4 05:05:42.560259 kubelet[2773]: E1104 05:05:42.560217 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 05:05:42.561441 containerd[1621]: time="2025-11-04T05:05:42.561371252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 4 05:05:43.266666 kubelet[2773]: E1104 05:05:43.266556 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:45.265869 kubelet[2773]: E1104 05:05:45.265810 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e"
Nov 4 05:05:45.718737 containerd[1621]: time="2025-11-04T05:05:45.718662937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:45.719682 containerd[1621]: time="2025-11-04T05:05:45.719627630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291"
Nov 4 05:05:45.720899 containerd[1621]: time="2025-11-04T05:05:45.720860956Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:45.723102 containerd[1621]: time="2025-11-04T05:05:45.723061379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 05:05:45.723558 containerd[1621]: time="2025-11-04T05:05:45.723529749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.162113813s"
Nov 4 05:05:45.723591 containerd[1621]: time="2025-11-04T05:05:45.723559745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 05:05:45.727298 containerd[1621]: time="2025-11-04T05:05:45.727264003Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 05:05:45.737033 containerd[1621]: time="2025-11-04T05:05:45.737004539Z" level=info msg="Container 5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:05:45.745621 containerd[1621]: time="2025-11-04T05:05:45.745573925Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600\""
Nov 4 05:05:45.746126 containerd[1621]: time="2025-11-04T05:05:45.746096877Z" level=info msg="StartContainer for \"5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600\""
Nov 4 05:05:45.747342 containerd[1621]: time="2025-11-04T05:05:45.747314444Z" level=info msg="connecting to shim 5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600" address="unix:///run/containerd/s/e45f9edd152473f4629417f87b7cbd305e84c2423bd7cddce4a79825150061d9" protocol=ttrpc version=3
Nov 4 05:05:45.778114 systemd[1]: Started cri-containerd-5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600.scope - libcontainer container 5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600.
Nov 4 05:05:45.825203 containerd[1621]: time="2025-11-04T05:05:45.825149825Z" level=info msg="StartContainer for \"5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600\" returns successfully" Nov 4 05:05:46.573236 kubelet[2773]: E1104 05:05:46.573190 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:46.927646 systemd[1]: cri-containerd-5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600.scope: Deactivated successfully. Nov 4 05:05:46.928014 systemd[1]: cri-containerd-5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600.scope: Consumed 696ms CPU time, 172.9M memory peak, 3.5M read from disk, 171.3M written to disk. Nov 4 05:05:46.959509 containerd[1621]: time="2025-11-04T05:05:46.959436228Z" level=info msg="received exit event container_id:\"5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600\" id:\"5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600\" pid:3484 exited_at:{seconds:1762232746 nanos:929191976}" Nov 4 05:05:46.987890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dff52e3884486075cddc5a047a3dad54a9693c62c9eb25b859d432dd4947600-rootfs.mount: Deactivated successfully. Nov 4 05:05:47.018592 kubelet[2773]: I1104 05:05:47.018528 2773 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 05:05:47.272890 systemd[1]: Created slice kubepods-besteffort-pod91e262bf_e00e_40d5_b480_4f480c906f2e.slice - libcontainer container kubepods-besteffort-pod91e262bf_e00e_40d5_b480_4f480c906f2e.slice. Nov 4 05:05:47.376307 containerd[1621]: time="2025-11-04T05:05:47.376238433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9ml2,Uid:91e262bf-e00e-40d5-b480-4f480c906f2e,Namespace:calico-system,Attempt:0,}" Nov 4 05:05:47.391512 systemd[1]: Created slice kubepods-besteffort-pod9d0eea3e_79a2_40f2_8a58_884e199c4ee3.slice - libcontainer container kubepods-besteffort-pod9d0eea3e_79a2_40f2_8a58_884e199c4ee3.slice. Nov 4 05:05:47.409020 systemd[1]: Created slice kubepods-burstable-podc7cb6ad5_0cac_4665_beee_6095b16743d4.slice - libcontainer container kubepods-burstable-podc7cb6ad5_0cac_4665_beee_6095b16743d4.slice. Nov 4 05:05:47.420369 systemd[1]: Created slice kubepods-burstable-podc909167d_9a08_4ecf_ae50_e53abffc84ba.slice - libcontainer container kubepods-burstable-podc909167d_9a08_4ecf_ae50_e53abffc84ba.slice. Nov 4 05:05:47.439139 systemd[1]: Created slice kubepods-besteffort-pod14f85a0a_9477_48d8_aa74_67ae5a309440.slice - libcontainer container kubepods-besteffort-pod14f85a0a_9477_48d8_aa74_67ae5a309440.slice. Nov 4 05:05:47.448986 systemd[1]: Created slice kubepods-besteffort-poda5f2eed7_c20b_4c5c_ba5f_390204bd1a8a.slice - libcontainer container kubepods-besteffort-poda5f2eed7_c20b_4c5c_ba5f_390204bd1a8a.slice. 
Nov 4 05:05:47.457031 kubelet[2773]: I1104 05:05:47.456487 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhvk9\" (UniqueName: \"kubernetes.io/projected/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-kube-api-access-zhvk9\") pod \"whisker-778c6bf6c-2hh8s\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " pod="calico-system/whisker-778c6bf6c-2hh8s" Nov 4 05:05:47.457031 kubelet[2773]: I1104 05:05:47.456546 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/14f85a0a-9477-48d8-aa74-67ae5a309440-calico-apiserver-certs\") pod \"calico-apiserver-6b7d776774-vjdtv\" (UID: \"14f85a0a-9477-48d8-aa74-67ae5a309440\") " pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" Nov 4 05:05:47.457031 kubelet[2773]: I1104 05:05:47.456566 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a34c144d-d4e5-45d0-a3e5-87f853e234f9-calico-apiserver-certs\") pod \"calico-apiserver-6864f4c9b8-bxhc8\" (UID: \"a34c144d-d4e5-45d0-a3e5-87f853e234f9\") " pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" Nov 4 05:05:47.457031 kubelet[2773]: I1104 05:05:47.456593 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a-tigera-ca-bundle\") pod \"calico-kube-controllers-699c5ddd64-52vk4\" (UID: \"a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a\") " pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" Nov 4 05:05:47.457031 kubelet[2773]: I1104 05:05:47.456607 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edca77af-e24f-4ad2-ba80-576707a67fed-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-r6ddg\" (UID: \"edca77af-e24f-4ad2-ba80-576707a67fed\") " pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.457376 kubelet[2773]: I1104 05:05:47.456623 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp82j\" (UniqueName: \"kubernetes.io/projected/a34c144d-d4e5-45d0-a3e5-87f853e234f9-kube-api-access-qp82j\") pod \"calico-apiserver-6864f4c9b8-bxhc8\" (UID: \"a34c144d-d4e5-45d0-a3e5-87f853e234f9\") " pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" Nov 4 05:05:47.457376 kubelet[2773]: I1104 05:05:47.456640 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2wjq\" (UniqueName: \"kubernetes.io/projected/14f85a0a-9477-48d8-aa74-67ae5a309440-kube-api-access-z2wjq\") pod \"calico-apiserver-6b7d776774-vjdtv\" (UID: \"14f85a0a-9477-48d8-aa74-67ae5a309440\") " pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" Nov 4 05:05:47.457376 kubelet[2773]: I1104 05:05:47.456658 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7cb6ad5-0cac-4665-beee-6095b16743d4-config-volume\") pod \"coredns-66bc5c9577-crx4t\" (UID: \"c7cb6ad5-0cac-4665-beee-6095b16743d4\") " pod="kube-system/coredns-66bc5c9577-crx4t" Nov 4 05:05:47.457376 kubelet[2773]: I1104 05:05:47.456674 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gns6c\" (UniqueName: \"kubernetes.io/projected/8b185d97-46d2-4bf3-a4dc-561af0c44ee9-kube-api-access-gns6c\") pod \"calico-apiserver-6b7d776774-g6zkv\" (UID: \"8b185d97-46d2-4bf3-a4dc-561af0c44ee9\") " pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" Nov 4 05:05:47.457376 kubelet[2773]: I1104 05:05:47.456698 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-backend-key-pair\") pod \"whisker-778c6bf6c-2hh8s\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " pod="calico-system/whisker-778c6bf6c-2hh8s" Nov 4 05:05:47.457562 kubelet[2773]: I1104 05:05:47.456720 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bld99\" (UniqueName: \"kubernetes.io/projected/c7cb6ad5-0cac-4665-beee-6095b16743d4-kube-api-access-bld99\") pod \"coredns-66bc5c9577-crx4t\" (UID: \"c7cb6ad5-0cac-4665-beee-6095b16743d4\") " pod="kube-system/coredns-66bc5c9577-crx4t" Nov 4 05:05:47.457562 kubelet[2773]: I1104 05:05:47.456741 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7lf\" (UniqueName: \"kubernetes.io/projected/c909167d-9a08-4ecf-ae50-e53abffc84ba-kube-api-access-9p7lf\") pod \"coredns-66bc5c9577-lw6sf\" (UID: \"c909167d-9a08-4ecf-ae50-e53abffc84ba\") " pod="kube-system/coredns-66bc5c9577-lw6sf" Nov 4 05:05:47.457562 kubelet[2773]: I1104 05:05:47.456764 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvxq\" (UniqueName: \"kubernetes.io/projected/a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a-kube-api-access-bmvxq\") pod \"calico-kube-controllers-699c5ddd64-52vk4\" (UID: \"a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a\") " pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" Nov 4 05:05:47.457562 kubelet[2773]: I1104 05:05:47.456789 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c909167d-9a08-4ecf-ae50-e53abffc84ba-config-volume\") pod \"coredns-66bc5c9577-lw6sf\" (UID: \"c909167d-9a08-4ecf-ae50-e53abffc84ba\") " pod="kube-system/coredns-66bc5c9577-lw6sf" Nov 4 05:05:47.457562 kubelet[2773]: I1104 05:05:47.456810 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edca77af-e24f-4ad2-ba80-576707a67fed-config\") pod \"goldmane-7c778bb748-r6ddg\" (UID: \"edca77af-e24f-4ad2-ba80-576707a67fed\") " pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.457702 kubelet[2773]: I1104 05:05:47.456834 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-ca-bundle\") pod \"whisker-778c6bf6c-2hh8s\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " pod="calico-system/whisker-778c6bf6c-2hh8s" Nov 4 05:05:47.457702 kubelet[2773]: I1104 05:05:47.456854 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b185d97-46d2-4bf3-a4dc-561af0c44ee9-calico-apiserver-certs\") pod \"calico-apiserver-6b7d776774-g6zkv\" (UID: \"8b185d97-46d2-4bf3-a4dc-561af0c44ee9\") " 
pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" Nov 4 05:05:47.457702 kubelet[2773]: I1104 05:05:47.456873 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/edca77af-e24f-4ad2-ba80-576707a67fed-goldmane-key-pair\") pod \"goldmane-7c778bb748-r6ddg\" (UID: \"edca77af-e24f-4ad2-ba80-576707a67fed\") " pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.457702 kubelet[2773]: I1104 05:05:47.456892 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2525\" (UniqueName: \"kubernetes.io/projected/edca77af-e24f-4ad2-ba80-576707a67fed-kube-api-access-t2525\") pod \"goldmane-7c778bb748-r6ddg\" (UID: \"edca77af-e24f-4ad2-ba80-576707a67fed\") " pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.460803 systemd[1]: Created slice kubepods-besteffort-poda34c144d_d4e5_45d0_a3e5_87f853e234f9.slice - libcontainer container kubepods-besteffort-poda34c144d_d4e5_45d0_a3e5_87f853e234f9.slice. Nov 4 05:05:47.472672 systemd[1]: Created slice kubepods-besteffort-pod8b185d97_46d2_4bf3_a4dc_561af0c44ee9.slice - libcontainer container kubepods-besteffort-pod8b185d97_46d2_4bf3_a4dc_561af0c44ee9.slice. Nov 4 05:05:47.481060 systemd[1]: Created slice kubepods-besteffort-podedca77af_e24f_4ad2_ba80_576707a67fed.slice - libcontainer container kubepods-besteffort-podedca77af_e24f_4ad2_ba80_576707a67fed.slice. Nov 4 05:05:47.555563 containerd[1621]: time="2025-11-04T05:05:47.555476801Z" level=error msg="Failed to destroy network for sandbox \"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.558452 containerd[1621]: time="2025-11-04T05:05:47.558385904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9ml2,Uid:91e262bf-e00e-40d5-b480-4f480c906f2e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.559262 systemd[1]: run-netns-cni\x2dcf062419\x2d8a41\x2d65be\x2deefb\x2d00bf7351bc4b.mount: Deactivated successfully. 
Nov 4 05:05:47.593056 kubelet[2773]: E1104 05:05:47.592452 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.593056 kubelet[2773]: E1104 05:05:47.592520 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9ml2" Nov 4 05:05:47.593056 kubelet[2773]: E1104 05:05:47.592937 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m9ml2" Nov 4 05:05:47.594095 kubelet[2773]: E1104 05:05:47.594050 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2e3fa89ed56f2bd180a3214566d2afb81928c900e56bb8a6c00130de360e1f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:05:47.600377 kubelet[2773]: E1104 05:05:47.600323 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:47.601296 containerd[1621]: time="2025-11-04T05:05:47.601241875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 4 05:05:47.706017 containerd[1621]: time="2025-11-04T05:05:47.705915314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778c6bf6c-2hh8s,Uid:9d0eea3e-79a2-40f2-8a58-884e199c4ee3,Namespace:calico-system,Attempt:0,}" Nov 4 05:05:47.717253 kubelet[2773]: E1104 05:05:47.717204 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:47.717991 containerd[1621]: time="2025-11-04T05:05:47.717900703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-crx4t,Uid:c7cb6ad5-0cac-4665-beee-6095b16743d4,Namespace:kube-system,Attempt:0,}" Nov 4 05:05:47.734381 kubelet[2773]: E1104 05:05:47.734317 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 
05:05:47.735336 containerd[1621]: time="2025-11-04T05:05:47.735263459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lw6sf,Uid:c909167d-9a08-4ecf-ae50-e53abffc84ba,Namespace:kube-system,Attempt:0,}" Nov 4 05:05:47.747680 containerd[1621]: time="2025-11-04T05:05:47.747606618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-vjdtv,Uid:14f85a0a-9477-48d8-aa74-67ae5a309440,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:05:47.757117 containerd[1621]: time="2025-11-04T05:05:47.756988356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699c5ddd64-52vk4,Uid:a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a,Namespace:calico-system,Attempt:0,}" Nov 4 05:05:47.771250 containerd[1621]: time="2025-11-04T05:05:47.771195646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864f4c9b8-bxhc8,Uid:a34c144d-d4e5-45d0-a3e5-87f853e234f9,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:05:47.781641 containerd[1621]: time="2025-11-04T05:05:47.781192851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-g6zkv,Uid:8b185d97-46d2-4bf3-a4dc-561af0c44ee9,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:05:47.791900 containerd[1621]: time="2025-11-04T05:05:47.791857578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-r6ddg,Uid:edca77af-e24f-4ad2-ba80-576707a67fed,Namespace:calico-system,Attempt:0,}" Nov 4 05:05:47.798457 containerd[1621]: time="2025-11-04T05:05:47.798426865Z" level=error msg="Failed to destroy network for sandbox \"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.816069 containerd[1621]: time="2025-11-04T05:05:47.813991745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778c6bf6c-2hh8s,Uid:9d0eea3e-79a2-40f2-8a58-884e199c4ee3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.816210 kubelet[2773]: E1104 05:05:47.814333 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.816210 kubelet[2773]: E1104 05:05:47.814405 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778c6bf6c-2hh8s" Nov 4 05:05:47.816210 kubelet[2773]: E1104 05:05:47.814430 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778c6bf6c-2hh8s" Nov 4 05:05:47.816326 kubelet[2773]: E1104 05:05:47.814500 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778c6bf6c-2hh8s_calico-system(9d0eea3e-79a2-40f2-8a58-884e199c4ee3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778c6bf6c-2hh8s_calico-system(9d0eea3e-79a2-40f2-8a58-884e199c4ee3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4054867204c19ec22eeb679330354bb70e58d920bcb3077231a03b47e9501c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778c6bf6c-2hh8s" podUID="9d0eea3e-79a2-40f2-8a58-884e199c4ee3" Nov 4 05:05:47.832586 containerd[1621]: time="2025-11-04T05:05:47.832401367Z" level=error msg="Failed to destroy network for sandbox \"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.869378 containerd[1621]: time="2025-11-04T05:05:47.838524886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-crx4t,Uid:c7cb6ad5-0cac-4665-beee-6095b16743d4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.871812 kubelet[2773]: E1104 05:05:47.871597 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.871812 kubelet[2773]: E1104 05:05:47.871663 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-crx4t" Nov 4 05:05:47.871812 kubelet[2773]: E1104 05:05:47.871684 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-crx4t" Nov 4 05:05:47.872018 kubelet[2773]: E1104 
05:05:47.871748 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-crx4t_kube-system(c7cb6ad5-0cac-4665-beee-6095b16743d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-crx4t_kube-system(c7cb6ad5-0cac-4665-beee-6095b16743d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f21f1119e0e494b6f99506c249cd59ea9a35efbbac9cfe125773cf5b824d1fd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-crx4t" podUID="c7cb6ad5-0cac-4665-beee-6095b16743d4" Nov 4 05:05:47.875254 containerd[1621]: time="2025-11-04T05:05:47.870246798Z" level=error msg="Failed to destroy network for sandbox \"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.880727 containerd[1621]: time="2025-11-04T05:05:47.880655174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lw6sf,Uid:c909167d-9a08-4ecf-ae50-e53abffc84ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.882059 kubelet[2773]: E1104 05:05:47.882008 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.882151 kubelet[2773]: E1104 05:05:47.882091 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lw6sf" Nov 4 05:05:47.882151 kubelet[2773]: E1104 05:05:47.882123 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lw6sf" Nov 4 05:05:47.883870 kubelet[2773]: E1104 05:05:47.882221 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lw6sf_kube-system(c909167d-9a08-4ecf-ae50-e53abffc84ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lw6sf_kube-system(c909167d-9a08-4ecf-ae50-e53abffc84ba)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"22a9af7daf4bf3c17b665672c1314c976b7f5c030038a66c7499f475540bda60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lw6sf" podUID="c909167d-9a08-4ecf-ae50-e53abffc84ba" Nov 4 05:05:47.887177 containerd[1621]: time="2025-11-04T05:05:47.887125786Z" level=error msg="Failed to destroy network for sandbox \"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.890735 containerd[1621]: time="2025-11-04T05:05:47.890640756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-vjdtv,Uid:14f85a0a-9477-48d8-aa74-67ae5a309440,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.891270 kubelet[2773]: E1104 05:05:47.891196 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.891367 kubelet[2773]: E1104 05:05:47.891339 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" Nov 4 05:05:47.891424 kubelet[2773]: E1104 05:05:47.891377 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" Nov 4 05:05:47.891580 kubelet[2773]: E1104 05:05:47.891511 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7d776774-vjdtv_calico-apiserver(14f85a0a-9477-48d8-aa74-67ae5a309440)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7d776774-vjdtv_calico-apiserver(14f85a0a-9477-48d8-aa74-67ae5a309440)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bb5b28e01bf69b9f9b4e51cb5f179fb82fb796ae7e72f54b50315578481a778\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:05:47.928824 containerd[1621]: time="2025-11-04T05:05:47.928762717Z" level=error msg="Failed to destroy network for sandbox \"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.933837 containerd[1621]: time="2025-11-04T05:05:47.933782634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864f4c9b8-bxhc8,Uid:a34c144d-d4e5-45d0-a3e5-87f853e234f9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.934385 kubelet[2773]: E1104 05:05:47.934340 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.934484 kubelet[2773]: E1104 05:05:47.934416 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" Nov 4 05:05:47.934484 kubelet[2773]: E1104 05:05:47.934443 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" Nov 4 05:05:47.934559 kubelet[2773]: E1104 05:05:47.934533 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6864f4c9b8-bxhc8_calico-apiserver(a34c144d-d4e5-45d0-a3e5-87f853e234f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6864f4c9b8-bxhc8_calico-apiserver(a34c144d-d4e5-45d0-a3e5-87f853e234f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"865edcae37d1e96308990cd0cdf8963ab5404a1b9f56a4e29c6c2a7281fdaa05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:05:47.938433 containerd[1621]: time="2025-11-04T05:05:47.938303884Z" level=error msg="Failed to destroy network for sandbox 
\"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.942807 containerd[1621]: time="2025-11-04T05:05:47.942736679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699c5ddd64-52vk4,Uid:a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.943332 kubelet[2773]: E1104 05:05:47.943288 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.943462 kubelet[2773]: E1104 05:05:47.943437 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" Nov 4 05:05:47.943550 kubelet[2773]: E1104 05:05:47.943527 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" Nov 4 05:05:47.943759 kubelet[2773]: E1104 05:05:47.943705 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-699c5ddd64-52vk4_calico-system(a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-699c5ddd64-52vk4_calico-system(a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae344e8243ee1a4ddad37c63c3cef1fe68cb1130431f11163aedf278369c34a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:05:47.950997 containerd[1621]: time="2025-11-04T05:05:47.950270728Z" level=error msg="Failed to destroy network for sandbox \"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 
05:05:47.953125 containerd[1621]: time="2025-11-04T05:05:47.953065316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-r6ddg,Uid:edca77af-e24f-4ad2-ba80-576707a67fed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.953389 kubelet[2773]: E1104 05:05:47.953348 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.953456 kubelet[2773]: E1104 05:05:47.953413 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.953456 kubelet[2773]: E1104 05:05:47.953437 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-r6ddg" Nov 4 05:05:47.953544 kubelet[2773]: E1104 05:05:47.953510 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-r6ddg_calico-system(edca77af-e24f-4ad2-ba80-576707a67fed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-r6ddg_calico-system(edca77af-e24f-4ad2-ba80-576707a67fed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e97ad674be47eba907db17a1b47e21fde377e210f2a65a9385f4249c4b33efa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:05:47.959390 containerd[1621]: time="2025-11-04T05:05:47.959331603Z" level=error msg="Failed to destroy network for sandbox \"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.962457 containerd[1621]: time="2025-11-04T05:05:47.962387512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-g6zkv,Uid:8b185d97-46d2-4bf3-a4dc-561af0c44ee9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.964093 kubelet[2773]: E1104 05:05:47.964052 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 05:05:47.964181 kubelet[2773]: E1104 05:05:47.964117 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" Nov 4 05:05:47.964181 kubelet[2773]: E1104 05:05:47.964138 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" Nov 4 05:05:47.964233 kubelet[2773]: E1104 05:05:47.964197 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7d776774-g6zkv_calico-apiserver(8b185d97-46d2-4bf3-a4dc-561af0c44ee9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7d776774-g6zkv_calico-apiserver(8b185d97-46d2-4bf3-a4dc-561af0c44ee9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67c1bd47cb3fa90f13f8f355f89ac1a5bcdaf8515d2240fbcb602892c5d09a0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:05:48.403296 kubelet[2773]: I1104 05:05:48.403221 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 05:05:48.403733 kubelet[2773]: E1104 05:05:48.403709 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:48.602352 kubelet[2773]: E1104 05:05:48.602308 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:54.985859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937305255.mount: Deactivated successfully. 
Nov 4 05:05:56.787434 containerd[1621]: time="2025-11-04T05:05:56.787329835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:56.788673 containerd[1621]: time="2025-11-04T05:05:56.788617572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 4 05:05:56.793761 containerd[1621]: time="2025-11-04T05:05:56.793721060Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:56.796140 containerd[1621]: time="2025-11-04T05:05:56.796095957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 05:05:56.796723 containerd[1621]: time="2025-11-04T05:05:56.796691404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.19539676s" Nov 4 05:05:56.796779 containerd[1621]: time="2025-11-04T05:05:56.796730989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 05:05:56.823584 containerd[1621]: time="2025-11-04T05:05:56.823513019Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 05:05:56.840768 containerd[1621]: time="2025-11-04T05:05:56.840706495Z" level=info msg="Container 6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:05:56.856821 containerd[1621]: time="2025-11-04T05:05:56.856738201Z" level=info msg="CreateContainer within sandbox \"fcbdeacf6b90b811e06334890a4e8eafbe761c96c2bc8f71d9c9119bd3c4edd7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88\"" Nov 4 05:05:56.858758 containerd[1621]: time="2025-11-04T05:05:56.858707196Z" level=info msg="StartContainer for \"6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88\"" Nov 4 05:05:56.860927 containerd[1621]: time="2025-11-04T05:05:56.860880253Z" level=info msg="connecting to shim 6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88" address="unix:///run/containerd/s/e45f9edd152473f4629417f87b7cbd305e84c2423bd7cddce4a79825150061d9" protocol=ttrpc version=3 Nov 4 05:05:56.890137 systemd[1]: Started cri-containerd-6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88.scope - libcontainer container 6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88. Nov 4 05:05:57.028596 containerd[1621]: time="2025-11-04T05:05:57.028540358Z" level=info msg="StartContainer for \"6a1d5ab51731f1c287f0872cb6cee255d88cf6ed85f5f3ece6182c50aef4bb88\" returns successfully" Nov 4 05:05:57.123045 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 4 05:05:57.124306 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Nov 4 05:05:57.336274 kubelet[2773]: I1104 05:05:57.336225 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-ca-bundle\") pod \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " Nov 4 05:05:57.336274 kubelet[2773]: I1104 05:05:57.336277 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-backend-key-pair\") pod \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " Nov 4 05:05:57.336849 kubelet[2773]: I1104 05:05:57.336311 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhvk9\" (UniqueName: \"kubernetes.io/projected/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-kube-api-access-zhvk9\") pod \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\" (UID: \"9d0eea3e-79a2-40f2-8a58-884e199c4ee3\") " Nov 4 05:05:57.337481 kubelet[2773]: I1104 05:05:57.337411 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9d0eea3e-79a2-40f2-8a58-884e199c4ee3" (UID: "9d0eea3e-79a2-40f2-8a58-884e199c4ee3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 05:05:57.340878 kubelet[2773]: I1104 05:05:57.340818 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-kube-api-access-zhvk9" (OuterVolumeSpecName: "kube-api-access-zhvk9") pod "9d0eea3e-79a2-40f2-8a58-884e199c4ee3" (UID: "9d0eea3e-79a2-40f2-8a58-884e199c4ee3"). InnerVolumeSpecName "kube-api-access-zhvk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:05:57.341852 kubelet[2773]: I1104 05:05:57.341788 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9d0eea3e-79a2-40f2-8a58-884e199c4ee3" (UID: "9d0eea3e-79a2-40f2-8a58-884e199c4ee3"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 05:05:57.437263 kubelet[2773]: I1104 05:05:57.437190 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhvk9\" (UniqueName: \"kubernetes.io/projected/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-kube-api-access-zhvk9\") on node \"localhost\" DevicePath \"\"" Nov 4 05:05:57.437263 kubelet[2773]: I1104 05:05:57.437238 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 4 05:05:57.437263 kubelet[2773]: I1104 05:05:57.437249 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9d0eea3e-79a2-40f2-8a58-884e199c4ee3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 4 05:05:57.629071 kubelet[2773]: E1104 05:05:57.628822 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:57.635872 systemd[1]: Removed slice kubepods-besteffort-pod9d0eea3e_79a2_40f2_8a58_884e199c4ee3.slice - libcontainer container kubepods-besteffort-pod9d0eea3e_79a2_40f2_8a58_884e199c4ee3.slice. Nov 4 05:05:57.645718 kubelet[2773]: I1104 05:05:57.645643 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w5r5r" podStartSLOduration=2.068515434 podStartE2EDuration="24.645620393s" podCreationTimestamp="2025-11-04 05:05:33 +0000 UTC" firstStartedPulling="2025-11-04 05:05:34.220421733 +0000 UTC m=+21.137165022" lastFinishedPulling="2025-11-04 05:05:56.797526682 +0000 UTC m=+43.714269981" observedRunningTime="2025-11-04 05:05:57.645181289 +0000 UTC m=+44.561924588" watchObservedRunningTime="2025-11-04 05:05:57.645620393 +0000 UTC m=+44.562363692" Nov 4 05:05:57.704381 systemd[1]: Created slice kubepods-besteffort-pod938996c3_4ddf_4544_932d_7cc7b7f765d9.slice - libcontainer container kubepods-besteffort-pod938996c3_4ddf_4544_932d_7cc7b7f765d9.slice. 
Nov 4 05:05:57.739665 kubelet[2773]: I1104 05:05:57.739611 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/938996c3-4ddf-4544-932d-7cc7b7f765d9-whisker-ca-bundle\") pod \"whisker-d98494775-rq6s9\" (UID: \"938996c3-4ddf-4544-932d-7cc7b7f765d9\") " pod="calico-system/whisker-d98494775-rq6s9" Nov 4 05:05:57.739845 kubelet[2773]: I1104 05:05:57.739696 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/938996c3-4ddf-4544-932d-7cc7b7f765d9-whisker-backend-key-pair\") pod \"whisker-d98494775-rq6s9\" (UID: \"938996c3-4ddf-4544-932d-7cc7b7f765d9\") " pod="calico-system/whisker-d98494775-rq6s9" Nov 4 05:05:57.739845 kubelet[2773]: I1104 05:05:57.739751 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdtd2\" (UniqueName: \"kubernetes.io/projected/938996c3-4ddf-4544-932d-7cc7b7f765d9-kube-api-access-mdtd2\") pod \"whisker-d98494775-rq6s9\" (UID: \"938996c3-4ddf-4544-932d-7cc7b7f765d9\") " pod="calico-system/whisker-d98494775-rq6s9" Nov 4 05:05:57.804326 systemd[1]: var-lib-kubelet-pods-9d0eea3e\x2d79a2\x2d40f2\x2d8a58\x2d884e199c4ee3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzhvk9.mount: Deactivated successfully. Nov 4 05:05:57.804475 systemd[1]: var-lib-kubelet-pods-9d0eea3e\x2d79a2\x2d40f2\x2d8a58\x2d884e199c4ee3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 4 05:05:58.011478 containerd[1621]: time="2025-11-04T05:05:58.011316264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d98494775-rq6s9,Uid:938996c3-4ddf-4544-932d-7cc7b7f765d9,Namespace:calico-system,Attempt:0,}" Nov 4 05:05:58.200737 systemd-networkd[1525]: calif2f5e696ef4: Link UP Nov 4 05:05:58.201585 systemd-networkd[1525]: calif2f5e696ef4: Gained carrier Nov 4 05:05:58.222168 containerd[1621]: 2025-11-04 05:05:58.042 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 05:05:58.222168 containerd[1621]: 2025-11-04 05:05:58.070 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--d98494775--rq6s9-eth0 whisker-d98494775- calico-system 938996c3-4ddf-4544-932d-7cc7b7f765d9 943 0 2025-11-04 05:05:57 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d98494775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-d98494775-rq6s9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif2f5e696ef4 [] [] }} ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-" Nov 4 05:05:58.222168 containerd[1621]: 2025-11-04 05:05:58.071 [INFO][3903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222168 containerd[1621]: 2025-11-04 05:05:58.148 [INFO][3918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0
ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" HandleID="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Workload="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.149 [INFO][3918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" HandleID="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Workload="localhost-k8s-whisker--d98494775--rq6s9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ea090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-d98494775-rq6s9", "timestamp":"2025-11-04 05:05:58.148235946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.149 [INFO][3918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.149 [INFO][3918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.150 [INFO][3918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.158 [INFO][3918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" host="localhost" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.166 [INFO][3918] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.172 [INFO][3918] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.174 [INFO][3918] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.176 [INFO][3918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:05:58.222447 containerd[1621]: 2025-11-04 05:05:58.176 [INFO][3918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" host="localhost" Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.178 [INFO][3918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4 Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.181 [INFO][3918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" host="localhost" Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.188 [INFO][3918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" host="localhost" Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.188 [INFO][3918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" host="localhost" Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.188 [INFO][3918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 05:05:58.222667 containerd[1621]: 2025-11-04 05:05:58.188 [INFO][3918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" HandleID="k8s-pod-network.3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Workload="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222788 containerd[1621]: 2025-11-04 05:05:58.193 [INFO][3903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d98494775--rq6s9-eth0", GenerateName:"whisker-d98494775-", Namespace:"calico-system", SelfLink:"", UID:"938996c3-4ddf-4544-932d-7cc7b7f765d9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d98494775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-d98494775-rq6s9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif2f5e696ef4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:05:58.222788 containerd[1621]: 2025-11-04 05:05:58.193 [INFO][3903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222891 containerd[1621]: 2025-11-04 05:05:58.193 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2f5e696ef4 ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222891 containerd[1621]: 2025-11-04 05:05:58.201 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.222935 containerd[1621]: 2025-11-04 05:05:58.201 [INFO][3903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d98494775--rq6s9-eth0", GenerateName:"whisker-d98494775-", Namespace:"calico-system", SelfLink:"", UID:"938996c3-4ddf-4544-932d-7cc7b7f765d9", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d98494775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4", Pod:"whisker-d98494775-rq6s9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif2f5e696ef4", MAC:"56:38:5e:f4:8e:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:05:58.223032 containerd[1621]: 2025-11-04 05:05:58.217 [INFO][3903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" Namespace="calico-system" Pod="whisker-d98494775-rq6s9" WorkloadEndpoint="localhost-k8s-whisker--d98494775--rq6s9-eth0" Nov 4 05:05:58.585444 containerd[1621]: time="2025-11-04T05:05:58.585371626Z" level=info msg="connecting to shim 3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4" address="unix:///run/containerd/s/1e98d4f30af445caae72c6917133dff207518da479de2362cfda64bb1b43adc6" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:05:58.631728 kubelet[2773]: I1104 05:05:58.631670 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 05:05:58.632399 kubelet[2773]: E1104 05:05:58.632307 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:58.644435 systemd[1]: Started cri-containerd-3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4.scope - libcontainer container 3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4. 
Nov 4 05:05:58.686325 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:05:58.781331 containerd[1621]: time="2025-11-04T05:05:58.781271691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d98494775-rq6s9,Uid:938996c3-4ddf-4544-932d-7cc7b7f765d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"3700319f2327b355f5e4403745500a16e0f66aa1b2cf4d728cdc852186ba48e4\"" Nov 4 05:05:58.792144 containerd[1621]: time="2025-11-04T05:05:58.792083289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 05:05:59.108177 containerd[1621]: time="2025-11-04T05:05:59.108127438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:05:59.110263 containerd[1621]: time="2025-11-04T05:05:59.110121689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 05:05:59.110263 containerd[1621]: time="2025-11-04T05:05:59.110224792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 05:05:59.110589 kubelet[2773]: E1104 05:05:59.110483 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:05:59.110589 kubelet[2773]: E1104 05:05:59.110560 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:05:59.110810 kubelet[2773]: E1104 05:05:59.110773 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 05:05:59.113296 containerd[1621]: time="2025-11-04T05:05:59.112999649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 05:05:59.181449 systemd-networkd[1525]: vxlan.calico: Link UP Nov 4 05:05:59.181462 systemd-networkd[1525]: vxlan.calico: Gained carrier Nov 4 05:05:59.270503 kubelet[2773]: I1104 05:05:59.270372 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0eea3e-79a2-40f2-8a58-884e199c4ee3" path="/var/lib/kubelet/pods/9d0eea3e-79a2-40f2-8a58-884e199c4ee3/volumes" Nov 4 05:05:59.273202 containerd[1621]: time="2025-11-04T05:05:59.273133697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-g6zkv,Uid:8b185d97-46d2-4bf3-a4dc-561af0c44ee9,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:05:59.415106 systemd-networkd[1525]: cali8c792607037: Link UP Nov 4 05:05:59.415328 systemd-networkd[1525]: cali8c792607037: Gained carrier Nov 4 05:05:59.432391 containerd[1621]: 2025-11-04 05:05:59.331 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0 calico-apiserver-6b7d776774- calico-apiserver 8b185d97-46d2-4bf3-a4dc-561af0c44ee9 859 0 2025-11-04 05:05:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7d776774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b7d776774-g6zkv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c792607037 [] [] }} ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-" Nov 4 05:05:59.432391 containerd[1621]: 2025-11-04 05:05:59.332 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.432391 containerd[1621]: 2025-11-04 05:05:59.366 [INFO][4209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" HandleID="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Workload="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.366 [INFO][4209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" HandleID="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Workload="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b7d776774-g6zkv", "timestamp":"2025-11-04 05:05:59.366396971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.366 [INFO][4209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.366 [INFO][4209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.366 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.375 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" host="localhost" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.381 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.386 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.388 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.390 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:05:59.432595 containerd[1621]: 2025-11-04 05:05:59.390 [INFO][4209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" host="localhost" Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.392 [INFO][4209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4 Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.399 [INFO][4209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" host="localhost" Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.405 [INFO][4209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" host="localhost" Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.405 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" host="localhost" Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.405 [INFO][4209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 05:05:59.432830 containerd[1621]: 2025-11-04 05:05:59.405 [INFO][4209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" HandleID="k8s-pod-network.717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Workload="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.432975 containerd[1621]: 2025-11-04 05:05:59.410 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0", GenerateName:"calico-apiserver-6b7d776774-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b185d97-46d2-4bf3-a4dc-561af0c44ee9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d776774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b7d776774-g6zkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c792607037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:05:59.433035 containerd[1621]: 2025-11-04 05:05:59.410 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.433035 containerd[1621]: 2025-11-04 05:05:59.410 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c792607037 ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.433035 containerd[1621]: 2025-11-04 05:05:59.414 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.433110 containerd[1621]: 2025-11-04 05:05:59.415 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0", GenerateName:"calico-apiserver-6b7d776774-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b185d97-46d2-4bf3-a4dc-561af0c44ee9", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d776774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4", Pod:"calico-apiserver-6b7d776774-g6zkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c792607037", MAC:"be:ee:6f:34:48:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:05:59.433188 containerd[1621]: 2025-11-04 05:05:59.426 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-g6zkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--g6zkv-eth0" Nov 4 05:05:59.441123 containerd[1621]: time="2025-11-04T05:05:59.440941724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:05:59.444091 containerd[1621]: time="2025-11-04T05:05:59.444050758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 05:05:59.444291 containerd[1621]: time="2025-11-04T05:05:59.444211049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 05:05:59.444539 kubelet[2773]: E1104 05:05:59.444320 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:05:59.444539 kubelet[2773]: E1104 05:05:59.444384 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:05:59.444539 kubelet[2773]: E1104 05:05:59.444487 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 05:05:59.444661 kubelet[2773]: E1104 05:05:59.444534 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:05:59.461137 containerd[1621]: time="2025-11-04T05:05:59.461075903Z" level=info msg="connecting to shim 717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4" address="unix:///run/containerd/s/9f7a4718a3ea0423eaf675be88824e7d83a4497445279cb10a3d6f4bb7db3581" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:05:59.493192 systemd[1]: Started cri-containerd-717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4.scope - libcontainer container 717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4. 
Nov 4 05:05:59.514775 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:05:59.555992 containerd[1621]: time="2025-11-04T05:05:59.554329970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-g6zkv,Uid:8b185d97-46d2-4bf3-a4dc-561af0c44ee9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"717d4bc07b83aabde40cbdcb84ec9fddb4266d37534a9d0a11e38a45c0b179a4\"" Nov 4 05:05:59.556705 containerd[1621]: time="2025-11-04T05:05:59.556656335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:05:59.640053 kubelet[2773]: E1104 05:05:59.639994 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:05:59.644718 kubelet[2773]: E1104 05:05:59.644624 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:05:59.879475 containerd[1621]: time="2025-11-04T05:05:59.879404184Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:05:59.922075 containerd[1621]: time="2025-11-04T05:05:59.922016149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:05:59.922075 containerd[1621]: time="2025-11-04T05:05:59.922055984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:05:59.922430 kubelet[2773]: E1104 05:05:59.922346 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:05:59.922508 kubelet[2773]: E1104 05:05:59.922429 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:05:59.922580 kubelet[2773]: E1104 05:05:59.922528 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7d776774-g6zkv_calico-apiserver(8b185d97-46d2-4bf3-a4dc-561af0c44ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:05:59.922624 kubelet[2773]: E1104 05:05:59.922589 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:00.039200 systemd-networkd[1525]: calif2f5e696ef4: Gained IPv6LL Nov 4 05:06:00.642778 kubelet[2773]: E1104 05:06:00.642707 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:00.643720 kubelet[2773]: E1104 05:06:00.643605 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:06:00.871866 systemd-networkd[1525]: vxlan.calico: Gained IPv6LL Nov 4 05:06:01.292950 kubelet[2773]: E1104 05:06:01.291940 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:01.294371 containerd[1621]: time="2025-11-04T05:06:01.292578904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-crx4t,Uid:c7cb6ad5-0cac-4665-beee-6095b16743d4,Namespace:kube-system,Attempt:0,}" Nov 4 05:06:01.305382 containerd[1621]: time="2025-11-04T05:06:01.305298198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9ml2,Uid:91e262bf-e00e-40d5-b480-4f480c906f2e,Namespace:calico-system,Attempt:0,}" Nov 4 05:06:01.310499 containerd[1621]: time="2025-11-04T05:06:01.310384751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-r6ddg,Uid:edca77af-e24f-4ad2-ba80-576707a67fed,Namespace:calico-system,Attempt:0,}" Nov 4 05:06:01.313556 containerd[1621]: time="2025-11-04T05:06:01.313480089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-vjdtv,Uid:14f85a0a-9477-48d8-aa74-67ae5a309440,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:06:01.450719 
systemd-networkd[1525]: cali8c792607037: Gained IPv6LL Nov 4 05:06:01.540302 systemd-networkd[1525]: cali969e8ea38d5: Link UP Nov 4 05:06:01.544456 systemd-networkd[1525]: cali969e8ea38d5: Gained carrier Nov 4 05:06:01.566149 containerd[1621]: 2025-11-04 05:06:01.383 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--crx4t-eth0 coredns-66bc5c9577- kube-system c7cb6ad5-0cac-4665-beee-6095b16743d4 858 0 2025-11-04 05:05:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-crx4t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali969e8ea38d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-" Nov 4 05:06:01.566149 containerd[1621]: 2025-11-04 05:06:01.384 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566149 containerd[1621]: 2025-11-04 05:06:01.441 [INFO][4391] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" HandleID="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Workload="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.442 [INFO][4391] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" HandleID="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Workload="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-crx4t", "timestamp":"2025-11-04 05:06:01.441514926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.442 [INFO][4391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.442 [INFO][4391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.442 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.461 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" host="localhost" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.476 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.491 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.494 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.499 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.566478 containerd[1621]: 2025-11-04 05:06:01.499 [INFO][4391] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" host="localhost" Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.503 [INFO][4391] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.514 [INFO][4391] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" host="localhost" Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.523 [INFO][4391] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" host="localhost" Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.523 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" host="localhost" Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.524 [INFO][4391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
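The recurring dns.go:154 "Nameserver limits exceeded" errors come from the node's resolv.conf listing more nameservers than the glibc resolver honours; kubelet keeps only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A toy version of that truncation — not kubelet's actual code, just the rule it applies:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line from the node's resolv.conf.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}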
Nov 4 05:06:01.566774 containerd[1621]: 2025-11-04 05:06:01.524 [INFO][4391] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" HandleID="k8s-pod-network.19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Workload="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.530 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--crx4t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c7cb6ad5-0cac-4665-beee-6095b16743d4", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-crx4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali969e8ea38d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.531 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.531 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali969e8ea38d5 ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.544 
[INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.544 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--crx4t-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c7cb6ad5-0cac-4665-beee-6095b16743d4", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab", Pod:"coredns-66bc5c9577-crx4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali969e8ea38d5", MAC:"f2:0f:b4:7a:8f:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.566947 containerd[1621]: 2025-11-04 05:06:01.563 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" Namespace="kube-system" Pod="coredns-66bc5c9577-crx4t" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--crx4t-eth0" Nov 4 05:06:01.605871 containerd[1621]: time="2025-11-04T05:06:01.605803022Z" level=info msg="connecting to shim 19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab" address="unix:///run/containerd/s/e5840f9e5bd9d3f43d71a7c10ccc006a43a48e97eca0460a7c59a89f00f8c6a7" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:01.644219 systemd[1]: Started 
cri-containerd-19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab.scope - libcontainer container 19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab. Nov 4 05:06:01.648226 systemd-networkd[1525]: calib458123ce68: Link UP Nov 4 05:06:01.654457 systemd-networkd[1525]: calib458123ce68: Gained carrier Nov 4 05:06:01.669039 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:01.711890 containerd[1621]: time="2025-11-04T05:06:01.711812045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-crx4t,Uid:c7cb6ad5-0cac-4665-beee-6095b16743d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab\"" Nov 4 05:06:01.713148 kubelet[2773]: E1104 05:06:01.713109 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:01.738284 containerd[1621]: time="2025-11-04T05:06:01.738217527Z" level=info msg="CreateContainer within sandbox \"19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.423 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m9ml2-eth0 csi-node-driver- calico-system 91e262bf-e00e-40d5-b480-4f480c906f2e 732 0 2025-11-04 05:05:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m9ml2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib458123ce68 [] [] }} ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.424 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.500 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" HandleID="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Workload="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.501 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" HandleID="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Workload="localhost-k8s-csi--node--driver--m9ml2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000539020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m9ml2", "timestamp":"2025-11-04 05:06:01.500853938 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.501 [INFO][4400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.523 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.525 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.557 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.577 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.585 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.589 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.597 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.597 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.600 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3 Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.612 [INFO][4400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.622 [INFO][4400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.622 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" host="localhost" Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.622 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
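The CoreDNS WorkloadEndpoint dump a few entries above prints its container ports as Go hex literals. Decoded they are the usual CoreDNS set; a trivial Go check:

package main

import "fmt"

func main() {
	// Hex ports from the endpoint dump -> decimal: dns/dns-tcp 53,
	// metrics 9153, liveness-probe 8080, readiness-probe 8181.
	for _, p := range []struct {
		name string
		port uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics", 0x23c1},
		{"liveness-probe", 0x1f90},
		{"readiness-probe", 0x1ff5},
	} {
		fmt.Printf("%-16s %d\n", p.name, p.port)
	}
}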
Nov 4 05:06:01.752396 containerd[1621]: 2025-11-04 05:06:01.622 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" HandleID="k8s-pod-network.a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Workload="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.630 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m9ml2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e262bf-e00e-40d5-b480-4f480c906f2e", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m9ml2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib458123ce68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.632 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.632 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib458123ce68 ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.655 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.656 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m9ml2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"91e262bf-e00e-40d5-b480-4f480c906f2e", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3", Pod:"csi-node-driver-m9ml2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib458123ce68", MAC:"be:7d:27:98:ba:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.753188 containerd[1621]: 2025-11-04 05:06:01.745 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" Namespace="calico-system" Pod="csi-node-driver-m9ml2" WorkloadEndpoint="localhost-k8s-csi--node--driver--m9ml2-eth0" Nov 4 05:06:01.842744 systemd-networkd[1525]: cali64947a592c0: Link UP Nov 4 05:06:01.844381 systemd-networkd[1525]: cali64947a592c0: Gained carrier Nov 4 05:06:01.904665 containerd[1621]: time="2025-11-04T05:06:01.904584983Z" level=info msg="Container 38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.427 [INFO][4363] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0 calico-apiserver-6b7d776774- calico-apiserver 14f85a0a-9477-48d8-aa74-67ae5a309440 857 0 2025-11-04 05:05:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7d776774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b7d776774-vjdtv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64947a592c0 [] [] }} ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.427 [INFO][4363] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" 
Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.501 [INFO][4409] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" HandleID="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Workload="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.502 [INFO][4409] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" HandleID="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Workload="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b7d776774-vjdtv", "timestamp":"2025-11-04 05:06:01.501717227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.502 [INFO][4409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.622 [INFO][4409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.623 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.657 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.679 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.756 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.760 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.764 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.764 [INFO][4409] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.766 [INFO][4409] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01 Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.803 [INFO][4409] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.832 [INFO][4409] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.832 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" host="localhost" Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.832 [INFO][4409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 05:06:01.908196 containerd[1621]: 2025-11-04 05:06:01.832 [INFO][4409] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" HandleID="k8s-pod-network.b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Workload="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.909717 containerd[1621]: 2025-11-04 05:06:01.836 [INFO][4363] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0", GenerateName:"calico-apiserver-6b7d776774-", Namespace:"calico-apiserver", SelfLink:"", UID:"14f85a0a-9477-48d8-aa74-67ae5a309440", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d776774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b7d776774-vjdtv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64947a592c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.909717 containerd[1621]: 2025-11-04 05:06:01.837 [INFO][4363] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.909717 containerd[1621]: 2025-11-04 05:06:01.837 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64947a592c0 ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.909717 containerd[1621]: 
2025-11-04 05:06:01.845 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.909717 containerd[1621]: 2025-11-04 05:06:01.847 [INFO][4363] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0", GenerateName:"calico-apiserver-6b7d776774-", Namespace:"calico-apiserver", SelfLink:"", UID:"14f85a0a-9477-48d8-aa74-67ae5a309440", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d776774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01", Pod:"calico-apiserver-6b7d776774-vjdtv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64947a592c0", MAC:"d2:22:a6:a2:ed:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:01.909717 containerd[1621]: 2025-11-04 05:06:01.902 [INFO][4363] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" Namespace="calico-apiserver" Pod="calico-apiserver-6b7d776774-vjdtv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b7d776774--vjdtv-eth0" Nov 4 05:06:01.931999 containerd[1621]: time="2025-11-04T05:06:01.931835881Z" level=info msg="connecting to shim a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" address="unix:///run/containerd/s/b49632eb3fc910341e40bce117d25759eeb11d2e3f33d492514f5567da7b3f05" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:01.963280 systemd[1]: Started cri-containerd-a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3.scope - libcontainer container a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3. 
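Annotation: the "connecting to shim" entry above shows containerd dialing the runtime shim's ttrpc endpoint on a unix socket under /run/containerd/s/, after which systemd tracks the container as a cri-containerd-<id>.scope unit. A sketch that pulls the socket address and protocol back out of such a logfmt line; the regex and helper are illustrative only, not containerd code:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Logfmt payload of one of the containerd entries above, trimmed.
	line := `msg="connecting to shim a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3" address="unix:///run/containerd/s/b49632eb3fc910341e40bce117d25759eeb11d2e3f33d492514f5567da7b3f05" namespace=k8s.io protocol=ttrpc version=3`

	// Illustrative extraction; containerd emits these fields through its
	// structured logger rather than parsing them back out of text.
	re := regexp.MustCompile(`address="(unix://[^"]+)".*protocol=(\S+) version=(\S+)`)
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Println("socket:  ", m[1])
		fmt.Println("protocol:", m[2], "(shim API version", m[3]+")")
	}
}
```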
Nov 4 05:06:01.967286 containerd[1621]: time="2025-11-04T05:06:01.967143204Z" level=info msg="CreateContainer within sandbox \"19af74d0b7ba0d674c6df1b622ed44f8a72edbcdb6261f520154f757c92421ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f\"" Nov 4 05:06:01.968822 containerd[1621]: time="2025-11-04T05:06:01.968795985Z" level=info msg="StartContainer for \"38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f\"" Nov 4 05:06:01.970793 containerd[1621]: time="2025-11-04T05:06:01.970764929Z" level=info msg="connecting to shim 38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f" address="unix:///run/containerd/s/e5840f9e5bd9d3f43d71a7c10ccc006a43a48e97eca0460a7c59a89f00f8c6a7" protocol=ttrpc version=3 Nov 4 05:06:01.997750 systemd-networkd[1525]: calicbb2219f12d: Link UP Nov 4 05:06:02.002152 systemd-networkd[1525]: calicbb2219f12d: Gained carrier Nov 4 05:06:02.005720 containerd[1621]: time="2025-11-04T05:06:02.005647465Z" level=info msg="connecting to shim b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01" address="unix:///run/containerd/s/d2c568b7031d6bb4e539e75542cff16a18b76ca390b56cc05e3891d345b3352c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:02.017857 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:02.039484 systemd[1]: Started cri-containerd-38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f.scope - libcontainer container 38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f. Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.429 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--r6ddg-eth0 goldmane-7c778bb748- calico-system edca77af-e24f-4ad2-ba80-576707a67fed 862 0 2025-11-04 05:05:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-r6ddg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicbb2219f12d [] [] }} ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.430 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.514 [INFO][4407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" HandleID="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Workload="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.514 [INFO][4407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" HandleID="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" 
Workload="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-r6ddg", "timestamp":"2025-11-04 05:06:01.514208503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.514 [INFO][4407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.832 [INFO][4407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.833 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.900 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.915 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.930 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.933 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.937 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.937 [INFO][4407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.940 [INFO][4407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32 Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.962 [INFO][4407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.976 [INFO][4407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.976 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" host="localhost" Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.976 [INFO][4407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 05:06:02.089360 containerd[1621]: 2025-11-04 05:06:01.976 [INFO][4407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" HandleID="k8s-pod-network.6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Workload="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:01.986 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--r6ddg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"edca77af-e24f-4ad2-ba80-576707a67fed", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-r6ddg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicbb2219f12d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:01.986 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:01.987 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbb2219f12d ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:02.012 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:02.014 [INFO][4361] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--r6ddg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"edca77af-e24f-4ad2-ba80-576707a67fed", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32", Pod:"goldmane-7c778bb748-r6ddg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicbb2219f12d", MAC:"72:d0:79:21:64:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:02.092321 containerd[1621]: 2025-11-04 05:06:02.042 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" Namespace="calico-system" Pod="goldmane-7c778bb748-r6ddg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--r6ddg-eth0" Nov 4 05:06:02.089474 systemd[1]: Started cri-containerd-b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01.scope - libcontainer container b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01. 
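Annotation: at cni-plugin/k8s.go 446 the plugin patches the endpoint with the generated MAC (72:d0:79:21:64:48 for the goldmane pod above) plus the host-side veth name and container ID, then writes it back to the datastore at k8s.go 532. A quick standard-library check that such a MAC parses as a valid 48-bit hardware address; the address is copied from the dump, the locally-administered-bit observation is mine:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC recorded for the goldmane endpoint in the dump above.
	mac, err := net.ParseMAC("72:d0:79:21:64:48")
	if err != nil {
		panic(err)
	}
	// The locally-administered bit (0x02 in the first octet) is set,
	// consistent with a synthetically generated container MAC.
	fmt.Printf("mac=%s locally-administered=%v\n", mac, mac[0]&0x02 != 0)
}
```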
Nov 4 05:06:02.125714 containerd[1621]: time="2025-11-04T05:06:02.125487879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m9ml2,Uid:91e262bf-e00e-40d5-b480-4f480c906f2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a35d02c7a705040e74ac4b7f68210ab7a36188aff68de6fbc9f2eae2c0472de3\"" Nov 4 05:06:02.129132 containerd[1621]: time="2025-11-04T05:06:02.129006190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 05:06:02.133825 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:02.146247 containerd[1621]: time="2025-11-04T05:06:02.146186662Z" level=info msg="StartContainer for \"38e110443ae9c09fc6fcf78410a7d7e804d7f04b5826a1cda729720359b1334f\" returns successfully" Nov 4 05:06:02.166686 containerd[1621]: time="2025-11-04T05:06:02.166243572Z" level=info msg="connecting to shim 6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32" address="unix:///run/containerd/s/3feb225bc283f9251b1128eeb425af26cc1e454ae8318fed13b8bcc3d5050ad3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:02.224798 containerd[1621]: time="2025-11-04T05:06:02.223930609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d776774-vjdtv,Uid:14f85a0a-9477-48d8-aa74-67ae5a309440,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b22c59d8f964f9b5e3ad9cc81753db1d6a64709cb48075b6f9a21cffb25cfe01\"" Nov 4 05:06:02.229198 systemd[1]: Started cri-containerd-6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32.scope - libcontainer container 6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32. Nov 4 05:06:02.262667 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:02.274436 kubelet[2773]: E1104 05:06:02.274374 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:02.276660 containerd[1621]: time="2025-11-04T05:06:02.276540741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lw6sf,Uid:c909167d-9a08-4ecf-ae50-e53abffc84ba,Namespace:kube-system,Attempt:0,}" Nov 4 05:06:02.282585 containerd[1621]: time="2025-11-04T05:06:02.282424310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699c5ddd64-52vk4,Uid:a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a,Namespace:calico-system,Attempt:0,}" Nov 4 05:06:02.286523 containerd[1621]: time="2025-11-04T05:06:02.286453679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864f4c9b8-bxhc8,Uid:a34c144d-d4e5-45d0-a3e5-87f853e234f9,Namespace:calico-apiserver,Attempt:0,}" Nov 4 05:06:02.374853 containerd[1621]: time="2025-11-04T05:06:02.374444580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-r6ddg,Uid:edca77af-e24f-4ad2-ba80-576707a67fed,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e3ae743fce62df1683f286a820b5526435337e39b35af7f610761709051bb32\"" Nov 4 05:06:02.444993 containerd[1621]: time="2025-11-04T05:06:02.444429049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:02.522540 containerd[1621]: time="2025-11-04T05:06:02.522240352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 05:06:02.522896 containerd[1621]: time="2025-11-04T05:06:02.522666742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:02.524456 kubelet[2773]: E1104 05:06:02.524180 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:02.526036 kubelet[2773]: E1104 05:06:02.524596 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:02.526341 kubelet[2773]: E1104 05:06:02.526269 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:02.527220 containerd[1621]: time="2025-11-04T05:06:02.526815465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:02.682884 kubelet[2773]: E1104 05:06:02.682237 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:02.720586 kubelet[2773]: I1104 05:06:02.720349 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-crx4t" podStartSLOduration=43.720295275 podStartE2EDuration="43.720295275s" podCreationTimestamp="2025-11-04 05:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:06:02.716086971 +0000 UTC m=+49.632830280" watchObservedRunningTime="2025-11-04 05:06:02.720295275 +0000 UTC m=+49.637038574" Nov 4 05:06:02.744199 systemd-networkd[1525]: calif27cd097083: Link UP Nov 4 05:06:02.746496 systemd-networkd[1525]: calif27cd097083: Gained carrier Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.416 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--lw6sf-eth0 coredns-66bc5c9577- kube-system c909167d-9a08-4ecf-ae50-e53abffc84ba 853 0 2025-11-04 05:05:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-lw6sf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif27cd097083 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.417 [INFO][4677] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.587 [INFO][4729] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" HandleID="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Workload="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.587 [INFO][4729] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" HandleID="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Workload="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051eb80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-lw6sf", "timestamp":"2025-11-04 05:06:02.587353001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.588 [INFO][4729] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.588 [INFO][4729] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.588 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.603 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.628 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.646 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.653 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.660 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.661 [INFO][4729] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.665 [INFO][4729] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6 Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.699 [INFO][4729] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.721 
[INFO][4729] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.722 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" host="localhost" Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.723 [INFO][4729] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 05:06:02.780367 containerd[1621]: 2025-11-04 05:06:02.723 [INFO][4729] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" HandleID="k8s-pod-network.fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Workload="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.733 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lw6sf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c909167d-9a08-4ecf-ae50-e53abffc84ba", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-lw6sf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif27cd097083", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.735 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] 
ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.735 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif27cd097083 ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.748 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.751 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--lw6sf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c909167d-9a08-4ecf-ae50-e53abffc84ba", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6", Pod:"coredns-66bc5c9577-lw6sf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif27cd097083", MAC:"4e:05:f6:62:ba:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:02.781543 containerd[1621]: 2025-11-04 05:06:02.777 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" Namespace="kube-system" Pod="coredns-66bc5c9577-lw6sf" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--lw6sf-eth0" Nov 4 05:06:02.835859 systemd-networkd[1525]: cali73dbbed0628: Link UP Nov 4 05:06:02.837141 containerd[1621]: time="2025-11-04T05:06:02.836745709Z" level=info msg="connecting to shim fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6" address="unix:///run/containerd/s/3fbca574c97a59919a18dbed6c67626157051575c6d93e19c87cfbee23635c01" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:02.837523 systemd-networkd[1525]: cali73dbbed0628: Gained carrier Nov 4 05:06:02.878265 containerd[1621]: time="2025-11-04T05:06:02.878207658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:02.884332 systemd[1]: Started cri-containerd-fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6.scope - libcontainer container fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6. Nov 4 05:06:02.906311 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:02.919262 systemd-networkd[1525]: calib458123ce68: Gained IPv6LL Nov 4 05:06:02.950159 containerd[1621]: time="2025-11-04T05:06:02.949972577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:02.950159 containerd[1621]: time="2025-11-04T05:06:02.949981443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:02.950379 kubelet[2773]: E1104 05:06:02.950292 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:02.950379 kubelet[2773]: E1104 05:06:02.950373 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:02.950630 kubelet[2773]: E1104 05:06:02.950596 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7d776774-vjdtv_calico-apiserver(14f85a0a-9477-48d8-aa74-67ae5a309440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:02.950680 kubelet[2773]: E1104 05:06:02.950642 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:06:02.951543 containerd[1621]: 
time="2025-11-04T05:06:02.951500773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 05:06:03.019448 containerd[1621]: time="2025-11-04T05:06:03.019364272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lw6sf,Uid:c909167d-9a08-4ecf-ae50-e53abffc84ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6\"" Nov 4 05:06:03.025304 kubelet[2773]: E1104 05:06:03.025235 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.519 [INFO][4701] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0 calico-apiserver-6864f4c9b8- calico-apiserver a34c144d-d4e5-45d0-a3e5-87f853e234f9 861 0 2025-11-04 05:05:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6864f4c9b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6864f4c9b8-bxhc8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73dbbed0628 [] [] }} ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.520 [INFO][4701] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.621 [INFO][4738] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" HandleID="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Workload="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.622 [INFO][4738] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" HandleID="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Workload="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6864f4c9b8-bxhc8", "timestamp":"2025-11-04 05:06:02.621773998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.622 [INFO][4738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.722 [INFO][4738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.723 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.737 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.750 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.770 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.779 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.783 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.783 [INFO][4738] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.786 [INFO][4738] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.794 [INFO][4738] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.809 [INFO][4738] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.809 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" host="localhost" Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.811 [INFO][4738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
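Annotation: the recurring kubelet warning "Nameserver limits exceeded" means the host resolv.conf lists more nameservers than the three a pod's resolv.conf can carry, so kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8, per the applied line in the log). A minimal sketch of that truncation; the fourth server below is hypothetical, since the log only shows the survivors:

```go
package main

import "fmt"

// kubelet caps a pod's resolv.conf at 3 nameservers (the warning in
// this log); extra entries are dropped, keeping the first three.
func capNameservers(ns []string, max int) []string {
	if len(ns) > max {
		return ns[:max]
	}
	return ns
}

func main() {
	// First three match the log's applied line; the fourth is a
	// hypothetical extra entry that would trigger the warning.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(capNameservers(host, 3))
}
```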
Nov 4 05:06:03.040301 containerd[1621]: 2025-11-04 05:06:02.811 [INFO][4738] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" HandleID="k8s-pod-network.422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Workload="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:02.831 [INFO][4701] cni-plugin/k8s.go 418: Populated endpoint ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0", GenerateName:"calico-apiserver-6864f4c9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a34c144d-d4e5-45d0-a3e5-87f853e234f9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6864f4c9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6864f4c9b8-bxhc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73dbbed0628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:02.831 [INFO][4701] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:02.831 [INFO][4701] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73dbbed0628 ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:02.838 [INFO][4701] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:02.840 [INFO][4701] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0", GenerateName:"calico-apiserver-6864f4c9b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a34c144d-d4e5-45d0-a3e5-87f853e234f9", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6864f4c9b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b", Pod:"calico-apiserver-6864f4c9b8-bxhc8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73dbbed0628", MAC:"aa:f4:4e:09:c8:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:03.041786 containerd[1621]: 2025-11-04 05:06:03.033 [INFO][4701] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" Namespace="calico-apiserver" Pod="calico-apiserver-6864f4c9b8-bxhc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6864f4c9b8--bxhc8-eth0" Nov 4 05:06:03.048010 containerd[1621]: time="2025-11-04T05:06:03.042894014Z" level=info msg="CreateContainer within sandbox \"fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 05:06:03.086235 containerd[1621]: time="2025-11-04T05:06:03.086178007Z" level=info msg="Container 98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1: CDI devices from CRI Config.CDIDevices: []" Nov 4 05:06:03.107727 containerd[1621]: time="2025-11-04T05:06:03.107463430Z" level=info msg="CreateContainer within sandbox \"fa64ff62ca8b36085915e272bff6af93af81ba7473437e41006ee031e6e850d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1\"" Nov 4 05:06:03.109890 containerd[1621]: time="2025-11-04T05:06:03.109422144Z" level=info msg="StartContainer for \"98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1\"" Nov 4 05:06:03.110108 containerd[1621]: time="2025-11-04T05:06:03.110079297Z" level=info msg="connecting to shim 422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b" address="unix:///run/containerd/s/2b71b6f544d3069f7a637250ebdcac2d282cede0981aa82211a304853b06caf3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:03.112059 containerd[1621]: time="2025-11-04T05:06:03.112031910Z" level=info 
msg="connecting to shim 98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1" address="unix:///run/containerd/s/3fbca574c97a59919a18dbed6c67626157051575c6d93e19c87cfbee23635c01" protocol=ttrpc version=3 Nov 4 05:06:03.113449 systemd-networkd[1525]: cali4c59de2879c: Link UP Nov 4 05:06:03.115211 systemd-networkd[1525]: cali4c59de2879c: Gained carrier Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.527 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0 calico-kube-controllers-699c5ddd64- calico-system a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a 860 0 2025-11-04 05:05:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:699c5ddd64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-699c5ddd64-52vk4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c59de2879c [] [] }} ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.528 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.626 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" HandleID="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Workload="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.627 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" HandleID="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Workload="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000117780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-699c5ddd64-52vk4", "timestamp":"2025-11-04 05:06:02.62695034 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.627 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.810 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:02.810 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.023 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.040 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.055 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.058 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.062 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.062 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.065 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6 Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.079 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.098 [INFO][4745] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.099 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" host="localhost" Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.099 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
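The IPAM sequence just above is strictly ordered: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, claim the next free address, write the block back, release the lock. A /26 leaves six host bits, so the block holds 64 addresses (.128 through .191), and both assignments seen in this log, .136 and .137, fall inside it. A quick standard-library check:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block Calico confirmed for host "localhost".
	block := netip.MustParsePrefix("192.168.88.128/26")

	// 32-26 = 6 host bits -> 64 addresses, .128 through .191.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// The two addresses assigned in this log.
	for _, s := range []string{"192.168.88.136", "192.168.88.137"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // true
	}
}
```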
Nov 4 05:06:03.144995 containerd[1621]: 2025-11-04 05:06:03.099 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" HandleID="k8s-pod-network.d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Workload="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.105 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0", GenerateName:"calico-kube-controllers-699c5ddd64-", Namespace:"calico-system", SelfLink:"", UID:"a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699c5ddd64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-699c5ddd64-52vk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c59de2879c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.105 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.105 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c59de2879c ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.114 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.116 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0", GenerateName:"calico-kube-controllers-699c5ddd64-", Namespace:"calico-system", SelfLink:"", UID:"a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 5, 5, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"699c5ddd64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6", Pod:"calico-kube-controllers-699c5ddd64-52vk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c59de2879c", MAC:"d2:a3:74:9d:88:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 05:06:03.145757 containerd[1621]: 2025-11-04 05:06:03.133 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" Namespace="calico-system" Pod="calico-kube-controllers-699c5ddd64-52vk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--699c5ddd64--52vk4-eth0" Nov 4 05:06:03.154215 systemd[1]: Started cri-containerd-98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1.scope - libcontainer container 98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1. Nov 4 05:06:03.171231 systemd[1]: Started cri-containerd-422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b.scope - libcontainer container 422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b. Nov 4 05:06:03.176339 systemd-networkd[1525]: cali64947a592c0: Gained IPv6LL Nov 4 05:06:03.224538 containerd[1621]: time="2025-11-04T05:06:03.224220080Z" level=info msg="connecting to shim d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6" address="unix:///run/containerd/s/67c0715c8d381de72115d25647eb1fe332e92cd0a44fa6249e8d53cce1c9b179" namespace=k8s.io protocol=ttrpc version=3 Nov 4 05:06:03.234739 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:03.277307 systemd[1]: Started cri-containerd-d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6.scope - libcontainer container d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6. 
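Each "connecting to shim ... protocol=ttrpc version=3" entry is containerd dialing the per-container shim socket, after which systemd tracks the container as a cri-containerd-<id>.scope unit. The same state can be inspected with the containerd Go client; a minimal sketch, assuming the default socket path and the k8s.io namespace, both of which appear in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket and namespace as seen in the "connecting to shim" lines.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// info.Image is the reference the container was created from.
		fmt.Println(c.ID(), info.Image)
	}
}
```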
Nov 4 05:06:03.291528 containerd[1621]: time="2025-11-04T05:06:03.290925803Z" level=info msg="StartContainer for \"98ffc8e67ea912e16614b482ef3e143c22eb3eab5588ea627631102adf5975c1\" returns successfully" Nov 4 05:06:03.333725 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 05:06:03.380188 containerd[1621]: time="2025-11-04T05:06:03.380130905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:03.398099 containerd[1621]: time="2025-11-04T05:06:03.398029304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6864f4c9b8-bxhc8,Uid:a34c144d-d4e5-45d0-a3e5-87f853e234f9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"422721722290256e048b51628cc6929e5d563d15bda17c8c8a4ea10de185d02b\"" Nov 4 05:06:03.495386 systemd-networkd[1525]: cali969e8ea38d5: Gained IPv6LL Nov 4 05:06:03.496615 systemd-networkd[1525]: calicbb2219f12d: Gained IPv6LL Nov 4 05:06:03.570946 containerd[1621]: time="2025-11-04T05:06:03.570719759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-699c5ddd64-52vk4,Uid:a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d53d570c1ca8a868bdb356b35bbfd344d04b7ee7f51402c230737e0642eda7e6\"" Nov 4 05:06:03.584438 containerd[1621]: time="2025-11-04T05:06:03.584349840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 05:06:03.584614 containerd[1621]: time="2025-11-04T05:06:03.584392259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:03.585052 kubelet[2773]: E1104 05:06:03.584985 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:03.585052 kubelet[2773]: E1104 05:06:03.585044 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:03.585598 kubelet[2773]: E1104 05:06:03.585217 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-r6ddg_calico-system(edca77af-e24f-4ad2-ba80-576707a67fed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:03.585598 kubelet[2773]: E1104 05:06:03.585263 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:06:03.585874 containerd[1621]: 
time="2025-11-04T05:06:03.585740478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 05:06:03.693710 kubelet[2773]: E1104 05:06:03.693671 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:03.695682 kubelet[2773]: E1104 05:06:03.695642 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:03.696412 kubelet[2773]: E1104 05:06:03.696360 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:06:03.697237 kubelet[2773]: E1104 05:06:03.696829 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:06:03.713212 kubelet[2773]: I1104 05:06:03.712985 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lw6sf" podStartSLOduration=44.712930012 podStartE2EDuration="44.712930012s" podCreationTimestamp="2025-11-04 05:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:06:03.712881801 +0000 UTC m=+50.629625100" watchObservedRunningTime="2025-11-04 05:06:03.712930012 +0000 UTC m=+50.629673321" Nov 4 05:06:03.929206 containerd[1621]: time="2025-11-04T05:06:03.928893989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:03.931128 containerd[1621]: time="2025-11-04T05:06:03.931046839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 05:06:03.931128 containerd[1621]: time="2025-11-04T05:06:03.931089819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:03.931484 kubelet[2773]: E1104 05:06:03.931431 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:03.931928 kubelet[2773]: E1104 05:06:03.931500 2773 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:03.931928 kubelet[2773]: E1104 05:06:03.931762 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:03.931928 kubelet[2773]: E1104 05:06:03.931816 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:06:03.932353 containerd[1621]: time="2025-11-04T05:06:03.932317903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:04.272555 containerd[1621]: time="2025-11-04T05:06:04.272360640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:04.273866 containerd[1621]: time="2025-11-04T05:06:04.273790634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:04.274050 containerd[1621]: time="2025-11-04T05:06:04.273849825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:04.274164 kubelet[2773]: E1104 05:06:04.274111 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:04.274211 kubelet[2773]: E1104 05:06:04.274177 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:04.274494 kubelet[2773]: E1104 05:06:04.274437 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6864f4c9b8-bxhc8_calico-apiserver(a34c144d-d4e5-45d0-a3e5-87f853e234f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" logger="UnhandledError" Nov 4 05:06:04.274623 kubelet[2773]: E1104 05:06:04.274512 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:04.274665 containerd[1621]: time="2025-11-04T05:06:04.274632874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 05:06:04.520160 systemd-networkd[1525]: cali73dbbed0628: Gained IPv6LL Nov 4 05:06:04.595193 containerd[1621]: time="2025-11-04T05:06:04.595122335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:04.660746 containerd[1621]: time="2025-11-04T05:06:04.660675551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 05:06:04.660914 containerd[1621]: time="2025-11-04T05:06:04.660731486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:04.661156 kubelet[2773]: E1104 05:06:04.661093 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:04.661253 kubelet[2773]: E1104 05:06:04.661159 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:04.661297 kubelet[2773]: E1104 05:06:04.661256 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-699c5ddd64-52vk4_calico-system(a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:04.661354 kubelet[2773]: E1104 05:06:04.661323 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:06:04.697424 kubelet[2773]: E1104 05:06:04.697336 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:04.697785 kubelet[2773]: E1104 05:06:04.697739 
2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:04.698587 kubelet[2773]: E1104 05:06:04.698513 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:04.698587 kubelet[2773]: E1104 05:06:04.698550 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:06:04.699173 kubelet[2773]: E1104 05:06:04.699107 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:06:04.775209 systemd-networkd[1525]: calif27cd097083: Gained IPv6LL Nov 4 05:06:04.903219 systemd-networkd[1525]: cali4c59de2879c: Gained IPv6LL Nov 4 05:06:05.654106 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:51620.service - OpenSSH per-connection server daemon (10.0.0.1:51620). Nov 4 05:06:05.699706 kubelet[2773]: E1104 05:06:05.699666 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:05.794426 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 51620 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:05.796403 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:05.802301 systemd-logind[1594]: New session 8 of user core. Nov 4 05:06:05.808185 systemd[1]: Started session-8.scope - Session 8 of User core. 
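kubelet's recurring "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pods and the rest are silently dropped. A small sketch reproducing that check; the limit of 3 mirrors glibc's MAXNS, and the file path is the conventional one:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNS = 3 // glibc MAXNS; the limit behind kubelet's warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNS {
		// The condition behind "Nameserver limits exceeded": extra
		// entries are omitted and only the first three apply.
		fmt.Printf("limit exceeded: %d nameservers, only %v will apply\n",
			len(servers), servers[:maxNS])
	} else {
		fmt.Println("within limits:", servers)
	}
}
```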
Nov 4 05:06:05.934663 sshd[4984]: Connection closed by 10.0.0.1 port 51620 Nov 4 05:06:05.934906 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:05.940048 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:51620.service: Deactivated successfully. Nov 4 05:06:05.942432 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 05:06:05.943283 systemd-logind[1594]: Session 8 logged out. Waiting for processes to exit. Nov 4 05:06:05.944744 systemd-logind[1594]: Removed session 8. Nov 4 05:06:10.947824 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:51634.service - OpenSSH per-connection server daemon (10.0.0.1:51634). Nov 4 05:06:11.017582 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 51634 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:11.019238 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:11.024572 systemd-logind[1594]: New session 9 of user core. Nov 4 05:06:11.033271 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 05:06:11.115539 sshd[5011]: Connection closed by 10.0.0.1 port 51634 Nov 4 05:06:11.115903 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:11.121170 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:51634.service: Deactivated successfully. Nov 4 05:06:11.124277 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 05:06:11.125843 systemd-logind[1594]: Session 9 logged out. Waiting for processes to exit. Nov 4 05:06:11.127971 systemd-logind[1594]: Removed session 9. Nov 4 05:06:11.271701 containerd[1621]: time="2025-11-04T05:06:11.271547894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 05:06:11.607616 containerd[1621]: time="2025-11-04T05:06:11.607524435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:11.609785 containerd[1621]: time="2025-11-04T05:06:11.609658338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 05:06:11.609785 containerd[1621]: time="2025-11-04T05:06:11.609704265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:11.610041 kubelet[2773]: E1104 05:06:11.609948 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:06:11.610041 kubelet[2773]: E1104 05:06:11.610038 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:06:11.610545 kubelet[2773]: E1104 05:06:11.610160 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Nov 4 05:06:11.611589 containerd[1621]: time="2025-11-04T05:06:11.611550527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 05:06:11.976090 containerd[1621]: time="2025-11-04T05:06:11.975888288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:11.978077 containerd[1621]: time="2025-11-04T05:06:11.978025137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 05:06:11.978206 containerd[1621]: time="2025-11-04T05:06:11.978118603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:11.978356 kubelet[2773]: E1104 05:06:11.978291 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:06:11.978356 kubelet[2773]: E1104 05:06:11.978348 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:06:11.978446 kubelet[2773]: E1104 05:06:11.978435 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:11.978509 kubelet[2773]: E1104 05:06:11.978474 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:06:15.269383 containerd[1621]: time="2025-11-04T05:06:15.269317214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:15.567387 containerd[1621]: time="2025-11-04T05:06:15.567330003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:15.642441 containerd[1621]: time="2025-11-04T05:06:15.642275892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:15.642441 containerd[1621]: time="2025-11-04T05:06:15.642317913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:15.642738 kubelet[2773]: E1104 05:06:15.642586 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:15.642738 kubelet[2773]: E1104 05:06:15.642646 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:15.643300 kubelet[2773]: E1104 05:06:15.642853 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6864f4c9b8-bxhc8_calico-apiserver(a34c144d-d4e5-45d0-a3e5-87f853e234f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:15.643300 kubelet[2773]: E1104 05:06:15.642908 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:15.643357 containerd[1621]: time="2025-11-04T05:06:15.643106284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:15.968496 containerd[1621]: time="2025-11-04T05:06:15.968343611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:16.093764 containerd[1621]: time="2025-11-04T05:06:16.093692056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:16.093930 containerd[1621]: time="2025-11-04T05:06:16.093742733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:16.094111 kubelet[2773]: E1104 05:06:16.094061 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:16.094167 kubelet[2773]: E1104 05:06:16.094116 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:16.094220 kubelet[2773]: E1104 05:06:16.094200 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start 
failed in pod calico-apiserver-6b7d776774-g6zkv_calico-apiserver(8b185d97-46d2-4bf3-a4dc-561af0c44ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:16.094259 kubelet[2773]: E1104 05:06:16.094242 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:16.131719 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:34370.service - OpenSSH per-connection server daemon (10.0.0.1:34370). Nov 4 05:06:16.197077 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 34370 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:16.199173 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:16.205305 systemd-logind[1594]: New session 10 of user core. Nov 4 05:06:16.215193 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 05:06:16.268946 containerd[1621]: time="2025-11-04T05:06:16.268706136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 05:06:16.312313 sshd[5033]: Connection closed by 10.0.0.1 port 34370 Nov 4 05:06:16.312787 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:16.318501 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:34370.service: Deactivated successfully. Nov 4 05:06:16.321107 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 05:06:16.322303 systemd-logind[1594]: Session 10 logged out. Waiting for processes to exit. Nov 4 05:06:16.323653 systemd-logind[1594]: Removed session 10. 
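Every failed pull in this log dies the same way: a clean 404 from ghcr.io, meaning the tags ghcr.io/flatcar/calico/*:v3.30.4 simply do not resolve; auth and networking are not the problem. A hedged sketch for reproducing the check against the registry v2 API; it assumes ghcr.io hands out anonymous pull tokens for public repositories, so treat the token endpoint and its behavior as an assumption rather than documented fact:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/goldmane", "v3.30.4" // image from this log

	// Assumption: ghcr.io issues anonymous pull tokens for public repos.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// HEAD the manifest: 200 means the tag exists, 404 is the
	// "not found" containerd reports throughout this log.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // expect "404 Not Found" here
}
```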
Nov 4 05:06:16.624662 containerd[1621]: time="2025-11-04T05:06:16.624606464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:16.778274 containerd[1621]: time="2025-11-04T05:06:16.778188763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 05:06:16.778274 containerd[1621]: time="2025-11-04T05:06:16.778227156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:16.778503 kubelet[2773]: E1104 05:06:16.778453 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:16.778887 kubelet[2773]: E1104 05:06:16.778503 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:16.778887 kubelet[2773]: E1104 05:06:16.778587 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:16.779494 containerd[1621]: time="2025-11-04T05:06:16.779451621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 05:06:17.191686 containerd[1621]: time="2025-11-04T05:06:17.191625456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:17.203530 containerd[1621]: time="2025-11-04T05:06:17.203469009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 05:06:17.203645 containerd[1621]: time="2025-11-04T05:06:17.203581315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:17.203817 kubelet[2773]: E1104 05:06:17.203755 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:17.203817 kubelet[2773]: E1104 05:06:17.203814 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:17.203909 kubelet[2773]: E1104 05:06:17.203893 2773 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:17.203962 kubelet[2773]: E1104 05:06:17.203932 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:06:17.268469 containerd[1621]: time="2025-11-04T05:06:17.268070087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:17.590059 containerd[1621]: time="2025-11-04T05:06:17.589949404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:17.591293 containerd[1621]: time="2025-11-04T05:06:17.591242790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:17.591378 containerd[1621]: time="2025-11-04T05:06:17.591316842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:17.591626 kubelet[2773]: E1104 05:06:17.591566 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:17.591626 kubelet[2773]: E1104 05:06:17.591626 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:17.591803 kubelet[2773]: E1104 05:06:17.591713 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7d776774-vjdtv_calico-apiserver(14f85a0a-9477-48d8-aa74-67ae5a309440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:17.591803 kubelet[2773]: E1104 05:06:17.591746 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:06:18.269317 containerd[1621]: time="2025-11-04T05:06:18.269212586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 05:06:18.629983 containerd[1621]: time="2025-11-04T05:06:18.629900950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:18.631205 containerd[1621]: time="2025-11-04T05:06:18.631163735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 05:06:18.631301 containerd[1621]: time="2025-11-04T05:06:18.631220003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:18.631481 kubelet[2773]: E1104 05:06:18.631428 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:18.631889 kubelet[2773]: E1104 05:06:18.631482 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:18.631889 kubelet[2773]: E1104 05:06:18.631576 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-699c5ddd64-52vk4_calico-system(a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:18.631889 kubelet[2773]: E1104 05:06:18.631610 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:06:19.267306 containerd[1621]: time="2025-11-04T05:06:19.267243309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 05:06:19.613546 containerd[1621]: time="2025-11-04T05:06:19.613457419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:19.757447 containerd[1621]: time="2025-11-04T05:06:19.757355416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 05:06:19.757630 containerd[1621]: 
time="2025-11-04T05:06:19.757453102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:19.757664 kubelet[2773]: E1104 05:06:19.757618 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:19.758153 kubelet[2773]: E1104 05:06:19.757668 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:19.758153 kubelet[2773]: E1104 05:06:19.757767 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-r6ddg_calico-system(edca77af-e24f-4ad2-ba80-576707a67fed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:19.758153 kubelet[2773]: E1104 05:06:19.757815 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:06:21.337349 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:34380.service - OpenSSH per-connection server daemon (10.0.0.1:34380). Nov 4 05:06:21.421132 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 34380 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:21.422655 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:21.427516 systemd-logind[1594]: New session 11 of user core. Nov 4 05:06:21.437137 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 05:06:21.516715 sshd[5058]: Connection closed by 10.0.0.1 port 34380 Nov 4 05:06:21.517183 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:21.529617 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:34380.service: Deactivated successfully. Nov 4 05:06:21.531858 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 05:06:21.533061 systemd-logind[1594]: Session 11 logged out. Waiting for processes to exit. Nov 4 05:06:21.535897 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:34388.service - OpenSSH per-connection server daemon (10.0.0.1:34388). Nov 4 05:06:21.536787 systemd-logind[1594]: Removed session 11. Nov 4 05:06:21.595406 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 34388 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:21.596767 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:21.601755 systemd-logind[1594]: New session 12 of user core. Nov 4 05:06:21.611119 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 4 05:06:21.737816 sshd[5075]: Connection closed by 10.0.0.1 port 34388 Nov 4 05:06:21.739464 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:21.750485 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:34388.service: Deactivated successfully. Nov 4 05:06:21.752871 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 05:06:21.755546 systemd-logind[1594]: Session 12 logged out. Waiting for processes to exit. Nov 4 05:06:21.758371 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:34394.service - OpenSSH per-connection server daemon (10.0.0.1:34394). Nov 4 05:06:21.759246 systemd-logind[1594]: Removed session 12. Nov 4 05:06:21.844167 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 34394 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:21.845796 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:21.850710 systemd-logind[1594]: New session 13 of user core. Nov 4 05:06:21.864124 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 05:06:22.163040 sshd[5089]: Connection closed by 10.0.0.1 port 34394 Nov 4 05:06:22.163250 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:22.169281 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:34394.service: Deactivated successfully. Nov 4 05:06:22.171456 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 05:06:22.172290 systemd-logind[1594]: Session 13 logged out. Waiting for processes to exit. Nov 4 05:06:22.173645 systemd-logind[1594]: Removed session 13. Nov 4 05:06:23.267621 kubelet[2773]: E1104 05:06:23.267543 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:24.267577 kubelet[2773]: E1104 05:06:24.267503 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:06:27.186873 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:58080.service - OpenSSH per-connection server daemon (10.0.0.1:58080). Nov 4 05:06:27.245483 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 58080 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:27.247589 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:27.252830 systemd-logind[1594]: New session 14 of user core. Nov 4 05:06:27.265184 systemd[1]: Started session-14.scope - Session 14 of User core. 
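
The recurring dns.go:154 "Nameserver limits exceeded" warnings are a separate issue from the image pulls: the node's resolv.conf lists more nameservers than the resolver supports, so the kubelet keeps only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and omits the rest. A minimal checker, assuming the conventional /etc/resolv.conf location and the glibc limit of three:

    # resolv_check.py -- flag resolv.conf files that exceed the classic
    # three-nameserver limit (glibc MAXNS), the condition behind the
    # kubelet's "Nameserver limits exceeded" warning.
    MAXNS = 3  # glibc's resolver uses only the first three nameservers

    def check_resolv_conf(path: str = "/etc/resolv.conf") -> list:
        with open(path) as fh:
            servers = [
                line.split()[1]
                for line in fh
                if line.strip().startswith("nameserver") and len(line.split()) > 1
            ]
        if len(servers) > MAXNS:
            print(f"{len(servers)} nameservers listed; "
                  f"only {servers[:MAXNS]} will be used")
        return servers

    if __name__ == "__main__":
        check_resolv_conf()
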
Nov 4 05:06:27.355379 sshd[5111]: Connection closed by 10.0.0.1 port 58080 Nov 4 05:06:27.355718 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:27.360178 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:58080.service: Deactivated successfully. Nov 4 05:06:27.362498 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 05:06:27.364158 systemd-logind[1594]: Session 14 logged out. Waiting for processes to exit. Nov 4 05:06:27.366061 systemd-logind[1594]: Removed session 14. Nov 4 05:06:28.267025 kubelet[2773]: E1104 05:06:28.266924 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:06:29.266770 kubelet[2773]: E1104 05:06:29.266716 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:29.741415 kubelet[2773]: E1104 05:06:29.741335 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:30.267555 kubelet[2773]: E1104 05:06:30.267482 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:06:31.268372 kubelet[2773]: E1104 05:06:31.268314 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:32.267737 kubelet[2773]: E1104 05:06:32.267312 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:32.268213 kubelet[2773]: E1104 05:06:32.268029 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:06:32.268472 kubelet[2773]: E1104 05:06:32.268430 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:06:32.374912 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:58088.service - OpenSSH per-connection server daemon (10.0.0.1:58088). Nov 4 05:06:32.438353 sshd[5151]: Accepted publickey for core from 10.0.0.1 port 58088 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:32.440827 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:32.446871 systemd-logind[1594]: New session 15 of user core. Nov 4 05:06:32.457343 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 05:06:32.555934 sshd[5154]: Connection closed by 10.0.0.1 port 58088 Nov 4 05:06:32.556304 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:32.560787 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:58088.service: Deactivated successfully. Nov 4 05:06:32.563561 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 05:06:32.566105 systemd-logind[1594]: Session 15 logged out. Waiting for processes to exit. Nov 4 05:06:32.568438 systemd-logind[1594]: Removed session 15. Nov 4 05:06:37.572939 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:36466.service - OpenSSH per-connection server daemon (10.0.0.1:36466). Nov 4 05:06:37.637641 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 36466 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:37.639880 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:37.645583 systemd-logind[1594]: New session 16 of user core. Nov 4 05:06:37.657226 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 05:06:37.745017 sshd[5171]: Connection closed by 10.0.0.1 port 36466 Nov 4 05:06:37.745376 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:37.751572 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:36466.service: Deactivated successfully. Nov 4 05:06:37.754046 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 05:06:37.755239 systemd-logind[1594]: Session 16 logged out. Waiting for processes to exit. Nov 4 05:06:37.756765 systemd-logind[1594]: Removed session 16. 
Nov 4 05:06:39.267715 containerd[1621]: time="2025-11-04T05:06:39.267640413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 05:06:39.675645 containerd[1621]: time="2025-11-04T05:06:39.675566877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:39.718661 containerd[1621]: time="2025-11-04T05:06:39.718596889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 05:06:39.718761 containerd[1621]: time="2025-11-04T05:06:39.718603492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:39.718938 kubelet[2773]: E1104 05:06:39.718886 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:06:39.719362 kubelet[2773]: E1104 05:06:39.718946 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 05:06:39.719362 kubelet[2773]: E1104 05:06:39.719090 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:39.720382 containerd[1621]: time="2025-11-04T05:06:39.720347730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 05:06:40.078722 containerd[1621]: time="2025-11-04T05:06:40.078641846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:40.129712 containerd[1621]: time="2025-11-04T05:06:40.129591915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 05:06:40.129887 containerd[1621]: time="2025-11-04T05:06:40.129656558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:40.129997 kubelet[2773]: E1104 05:06:40.129904 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:06:40.130098 kubelet[2773]: E1104 05:06:40.130007 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 05:06:40.130159 kubelet[2773]: E1104 05:06:40.130109 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-d98494775-rq6s9_calico-system(938996c3-4ddf-4544-932d-7cc7b7f765d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:40.130196 kubelet[2773]: E1104 05:06:40.130157 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:06:42.267812 containerd[1621]: time="2025-11-04T05:06:42.267757625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:42.641228 containerd[1621]: time="2025-11-04T05:06:42.641142461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:42.759315 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:36498.service - OpenSSH per-connection server daemon (10.0.0.1:36498). Nov 4 05:06:42.783948 containerd[1621]: time="2025-11-04T05:06:42.783877943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:42.784035 containerd[1621]: time="2025-11-04T05:06:42.783942005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:42.784275 kubelet[2773]: E1104 05:06:42.784214 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:42.784275 kubelet[2773]: E1104 05:06:42.784276 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:42.784699 kubelet[2773]: E1104 05:06:42.784501 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7d776774-g6zkv_calico-apiserver(8b185d97-46d2-4bf3-a4dc-561af0c44ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:42.784699 kubelet[2773]: E1104 05:06:42.784554 2773 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:42.814356 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 36498 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:42.817115 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:42.822781 systemd-logind[1594]: New session 17 of user core. Nov 4 05:06:42.832183 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 05:06:43.007084 sshd[5196]: Connection closed by 10.0.0.1 port 36498 Nov 4 05:06:43.008205 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:43.016041 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:36498.service: Deactivated successfully. Nov 4 05:06:43.018363 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 05:06:43.019178 systemd-logind[1594]: Session 17 logged out. Waiting for processes to exit. Nov 4 05:06:43.020372 systemd-logind[1594]: Removed session 17. Nov 4 05:06:43.271387 containerd[1621]: time="2025-11-04T05:06:43.271243514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:43.757390 containerd[1621]: time="2025-11-04T05:06:43.757331208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:43.810231 containerd[1621]: time="2025-11-04T05:06:43.810130035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:43.810396 containerd[1621]: time="2025-11-04T05:06:43.810167676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:43.810556 kubelet[2773]: E1104 05:06:43.810489 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:43.810556 kubelet[2773]: E1104 05:06:43.810550 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:43.811045 kubelet[2773]: E1104 05:06:43.810727 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b7d776774-vjdtv_calico-apiserver(14f85a0a-9477-48d8-aa74-67ae5a309440): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:43.811045 kubelet[2773]: E1104 05:06:43.810783 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:06:43.811151 containerd[1621]: time="2025-11-04T05:06:43.811125131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 05:06:44.257667 containerd[1621]: time="2025-11-04T05:06:44.257589154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:44.266881 kubelet[2773]: E1104 05:06:44.266841 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:44.277305 containerd[1621]: time="2025-11-04T05:06:44.277231740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 05:06:44.277888 containerd[1621]: time="2025-11-04T05:06:44.277333352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:44.277938 kubelet[2773]: E1104 05:06:44.277529 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:44.277938 kubelet[2773]: E1104 05:06:44.277588 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 05:06:44.277938 kubelet[2773]: E1104 05:06:44.277823 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6864f4c9b8-bxhc8_calico-apiserver(a34c144d-d4e5-45d0-a3e5-87f853e234f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:44.277938 kubelet[2773]: E1104 05:06:44.277876 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:44.278190 containerd[1621]: time="2025-11-04T05:06:44.278133217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 05:06:45.274608 containerd[1621]: time="2025-11-04T05:06:45.274553477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:45.276699 containerd[1621]: time="2025-11-04T05:06:45.276561942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 05:06:45.276699 containerd[1621]: time="2025-11-04T05:06:45.276605413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:45.276943 kubelet[2773]: E1104 05:06:45.276873 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:45.276943 kubelet[2773]: E1104 05:06:45.276941 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 05:06:45.277396 kubelet[2773]: E1104 05:06:45.277222 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:45.277627 containerd[1621]: time="2025-11-04T05:06:45.277580771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 05:06:45.609175 containerd[1621]: time="2025-11-04T05:06:45.609016504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:45.720698 containerd[1621]: time="2025-11-04T05:06:45.720629974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:45.720881 containerd[1621]: time="2025-11-04T05:06:45.720729393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 05:06:45.721082 kubelet[2773]: E1104 05:06:45.721016 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:45.721082 kubelet[2773]: E1104 05:06:45.721077 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 05:06:45.721358 kubelet[2773]: E1104 05:06:45.721322 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-r6ddg_calico-system(edca77af-e24f-4ad2-ba80-576707a67fed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:45.721441 kubelet[2773]: E1104 05:06:45.721372 
2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:06:45.721509 containerd[1621]: time="2025-11-04T05:06:45.721436994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 05:06:46.085811 containerd[1621]: time="2025-11-04T05:06:46.085725316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:46.087168 containerd[1621]: time="2025-11-04T05:06:46.087113124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 05:06:46.087422 containerd[1621]: time="2025-11-04T05:06:46.087189999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:46.087701 kubelet[2773]: E1104 05:06:46.087641 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:46.087701 kubelet[2773]: E1104 05:06:46.087705 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 05:06:46.087947 kubelet[2773]: E1104 05:06:46.087814 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-m9ml2_calico-system(91e262bf-e00e-40d5-b480-4f480c906f2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:46.087947 kubelet[2773]: E1104 05:06:46.087864 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:06:46.268080 containerd[1621]: time="2025-11-04T05:06:46.268023192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 05:06:46.665901 
containerd[1621]: time="2025-11-04T05:06:46.665815559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 05:06:46.729486 containerd[1621]: time="2025-11-04T05:06:46.729399267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 4 05:06:46.729699 containerd[1621]: time="2025-11-04T05:06:46.729435265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 05:06:46.729846 kubelet[2773]: E1104 05:06:46.729793 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:46.730253 kubelet[2773]: E1104 05:06:46.729859 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 05:06:46.730253 kubelet[2773]: E1104 05:06:46.729987 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-699c5ddd64-52vk4_calico-system(a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 05:06:46.730253 kubelet[2773]: E1104 05:06:46.730029 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:06:48.026383 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:53640.service - OpenSSH per-connection server daemon (10.0.0.1:53640). Nov 4 05:06:48.122709 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:48.124792 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:48.130366 systemd-logind[1594]: New session 18 of user core. Nov 4 05:06:48.139304 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 05:06:48.228079 sshd[5215]: Connection closed by 10.0.0.1 port 53640 Nov 4 05:06:48.228426 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:48.240082 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:53640.service: Deactivated successfully. Nov 4 05:06:48.242833 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 05:06:48.243923 systemd-logind[1594]: Session 18 logged out. Waiting for processes to exit. 
Nov 4 05:06:48.247567 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:53650.service - OpenSSH per-connection server daemon (10.0.0.1:53650). Nov 4 05:06:48.248954 systemd-logind[1594]: Removed session 18. Nov 4 05:06:48.267126 kubelet[2773]: E1104 05:06:48.267083 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:48.312014 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 53650 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:48.314055 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:48.320132 systemd-logind[1594]: New session 19 of user core. Nov 4 05:06:48.330198 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 05:06:48.749706 sshd[5231]: Connection closed by 10.0.0.1 port 53650 Nov 4 05:06:48.749858 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:48.764387 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:53650.service: Deactivated successfully. Nov 4 05:06:48.766891 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 05:06:48.768348 systemd-logind[1594]: Session 19 logged out. Waiting for processes to exit. Nov 4 05:06:48.772453 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658). Nov 4 05:06:48.773456 systemd-logind[1594]: Removed session 19. Nov 4 05:06:48.843259 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:48.845328 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:48.850434 systemd-logind[1594]: New session 20 of user core. Nov 4 05:06:48.868281 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 05:06:49.613605 sshd[5246]: Connection closed by 10.0.0.1 port 53658 Nov 4 05:06:49.614288 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:49.624508 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:53658.service: Deactivated successfully. Nov 4 05:06:49.627224 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 05:06:49.628089 systemd-logind[1594]: Session 20 logged out. Waiting for processes to exit. Nov 4 05:06:49.632405 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666). Nov 4 05:06:49.633284 systemd-logind[1594]: Removed session 20. Nov 4 05:06:49.697257 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:49.699024 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:49.703870 systemd-logind[1594]: New session 21 of user core. Nov 4 05:06:49.721277 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 05:06:50.267090 kubelet[2773]: E1104 05:06:50.267035 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:06:50.676300 sshd[5267]: Connection closed by 10.0.0.1 port 53666 Nov 4 05:06:50.676594 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:50.686151 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:53666.service: Deactivated successfully. 
Nov 4 05:06:50.688450 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 05:06:50.689534 systemd-logind[1594]: Session 21 logged out. Waiting for processes to exit. Nov 4 05:06:50.693095 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:53674.service - OpenSSH per-connection server daemon (10.0.0.1:53674). Nov 4 05:06:50.693881 systemd-logind[1594]: Removed session 21. Nov 4 05:06:50.751377 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 53674 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:50.753316 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:50.758215 systemd-logind[1594]: New session 22 of user core. Nov 4 05:06:50.766136 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 05:06:51.449231 sshd[5286]: Connection closed by 10.0.0.1 port 53674 Nov 4 05:06:51.449542 sshd-session[5283]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:51.453737 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:53674.service: Deactivated successfully. Nov 4 05:06:51.455871 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 05:06:51.456738 systemd-logind[1594]: Session 22 logged out. Waiting for processes to exit. Nov 4 05:06:51.457934 systemd-logind[1594]: Removed session 22. Nov 4 05:06:54.268717 kubelet[2773]: E1104 05:06:54.268635 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:06:56.467215 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:52824.service - OpenSSH per-connection server daemon (10.0.0.1:52824). Nov 4 05:06:56.527015 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 52824 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:06:56.528868 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:06:56.533802 systemd-logind[1594]: New session 23 of user core. Nov 4 05:06:56.541329 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 05:06:56.627100 sshd[5303]: Connection closed by 10.0.0.1 port 52824 Nov 4 05:06:56.629276 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Nov 4 05:06:56.633294 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:52824.service: Deactivated successfully. Nov 4 05:06:56.635826 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 05:06:56.637464 systemd-logind[1594]: Session 23 logged out. Waiting for processes to exit. Nov 4 05:06:56.639174 systemd-logind[1594]: Removed session 23. 
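
Interleaved with the Calico errors, sshd and systemd-logind record a steady cycle of short sessions for user core, each leaving a matched "New session N" / "Removed session N" pair. Session durations can be recovered mechanically from those pairs; the sketch below assumes journal text in the format shown here (entries may share a physical line), binds each timestamp to its own logind entry, and hardcodes the year, since these timestamps omit it.

    # session_durations.py -- pair systemd-logind "New session N" /
    # "Removed session N" entries from a journal dump and report how long
    # each session lasted. Timestamp prefix format ("Nov 4 05:06:21.427516")
    # and the 2025 year are assumptions taken from this log.
    import re
    from datetime import datetime

    ENTRY = re.compile(
        r"(?P<ts>\w+ +\d+ \d+:\d+:\d+\.\d+) systemd-logind\[\d+\]: "
        r"(?P<event>New|Removed) session (?P<id>\d+)"
    )

    def session_durations(lines):
        opened = {}
        for line in lines:
            for m in ENTRY.finditer(line):
                # Normalize whitespace, then parse with an assumed year.
                ts_text = " ".join(m["ts"].split())
                ts = datetime.strptime("2025 " + ts_text, "%Y %b %d %H:%M:%S.%f")
                if m["event"] == "New":
                    opened[m["id"]] = ts
                elif m["id"] in opened:
                    yield m["id"], (ts - opened.pop(m["id"])).total_seconds()

    # e.g.: for sid, secs in session_durations(open("journal.txt")):
    #           print(f"session {sid}: {secs:.1f}s")
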
Nov 4 05:06:57.268324 kubelet[2773]: E1104 05:06:57.268254 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:06:58.267988 kubelet[2773]: E1104 05:06:58.267453 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:06:59.268225 kubelet[2773]: E1104 05:06:59.268173 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:07:00.268664 kubelet[2773]: E1104 05:07:00.268595 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:07:01.268588 kubelet[2773]: E1104 05:07:01.268526 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:07:01.268889 kubelet[2773]: E1104 05:07:01.268864 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed" Nov 4 05:07:01.645013 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:52902.service - OpenSSH per-connection server daemon (10.0.0.1:52902). Nov 4 05:07:01.720127 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 52902 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:07:01.722163 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:07:01.727210 systemd-logind[1594]: New session 24 of user core. Nov 4 05:07:01.737212 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 4 05:07:01.927671 sshd[5343]: Connection closed by 10.0.0.1 port 52902 Nov 4 05:07:01.927975 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Nov 4 05:07:01.932613 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:52902.service: Deactivated successfully. Nov 4 05:07:01.935033 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 05:07:01.935849 systemd-logind[1594]: Session 24 logged out. Waiting for processes to exit. Nov 4 05:07:01.937058 systemd-logind[1594]: Removed session 24. Nov 4 05:07:06.268375 kubelet[2773]: E1104 05:07:06.268290 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-d98494775-rq6s9" podUID="938996c3-4ddf-4544-932d-7cc7b7f765d9" Nov 4 05:07:06.942185 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:53402.service - OpenSSH per-connection server daemon (10.0.0.1:53402). Nov 4 05:07:07.009159 sshd[5359]: Accepted publickey for core from 10.0.0.1 port 53402 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:07:07.008922 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:07:07.021900 systemd-logind[1594]: New session 25 of user core. Nov 4 05:07:07.026490 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 4 05:07:07.166882 sshd[5362]: Connection closed by 10.0.0.1 port 53402 Nov 4 05:07:07.167229 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Nov 4 05:07:07.172420 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:53402.service: Deactivated successfully. Nov 4 05:07:07.175286 systemd[1]: session-25.scope: Deactivated successfully. Nov 4 05:07:07.176265 systemd-logind[1594]: Session 25 logged out. Waiting for processes to exit. Nov 4 05:07:07.178441 systemd-logind[1594]: Removed session 25. 
Nov 4 05:07:07.266692 kubelet[2773]: E1104 05:07:07.266530 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:07:08.266617 kubelet[2773]: E1104 05:07:08.266561 2773 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 05:07:10.267934 kubelet[2773]: E1104 05:07:10.267708 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-g6zkv" podUID="8b185d97-46d2-4bf3-a4dc-561af0c44ee9" Nov 4 05:07:10.268727 kubelet[2773]: E1104 05:07:10.268340 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6864f4c9b8-bxhc8" podUID="a34c144d-d4e5-45d0-a3e5-87f853e234f9" Nov 4 05:07:12.184981 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:53436.service - OpenSSH per-connection server daemon (10.0.0.1:53436). Nov 4 05:07:12.252067 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 53436 ssh2: RSA SHA256:ahXlKPynqdroRTMgGrryfNb23Atwptm9fcSPhMZaJok Nov 4 05:07:12.253403 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:07:12.259544 systemd-logind[1594]: New session 26 of user core. Nov 4 05:07:12.265131 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 4 05:07:12.268674 kubelet[2773]: E1104 05:07:12.268410 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-699c5ddd64-52vk4" podUID="a5f2eed7-c20b-4c5c-ba5f-390204bd1a8a" Nov 4 05:07:12.359003 sshd[5379]: Connection closed by 10.0.0.1 port 53436 Nov 4 05:07:12.358807 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Nov 4 05:07:12.364629 systemd-logind[1594]: Session 26 logged out. Waiting for processes to exit. Nov 4 05:07:12.365232 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:53436.service: Deactivated successfully. Nov 4 05:07:12.367831 systemd[1]: session-26.scope: Deactivated successfully. Nov 4 05:07:12.370081 systemd-logind[1594]: Removed session 26. 
Nov 4 05:07:13.272634 kubelet[2773]: E1104 05:07:13.272575 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b7d776774-vjdtv" podUID="14f85a0a-9477-48d8-aa74-67ae5a309440" Nov 4 05:07:13.274380 kubelet[2773]: E1104 05:07:13.272699 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m9ml2" podUID="91e262bf-e00e-40d5-b480-4f480c906f2e" Nov 4 05:07:13.274380 kubelet[2773]: E1104 05:07:13.272782 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-r6ddg" podUID="edca77af-e24f-4ad2-ba80-576707a67fed"
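
None of the failing pods can recover on their own: the kubelet will keep alternating ErrImagePull and ImagePullBackOff for as long as the manifests point at the missing v3.30.4 tag, so the fix is to update the image references to a tag that actually exists. As one illustration, the sketch below patches a deployment's container image with the kubernetes Python client; the namespace, deployment, container name and replacement tag are placeholders, not values confirmed by this log, and if these workloads are operator-managed the operator's configuration is the right place to change instead.

    # retag_deployment.py -- point a deployment at a different image tag so
    # the kubelet stops retrying a nonexistent one. All identifiers below
    # are illustrative placeholders, not values taken from this log.
    from kubernetes import client, config

    def retag(namespace: str, deployment: str, container: str, image: str):
        config.load_kube_config()  # or load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        # Strategic-merge patch: containers are merged by name, so only the
        # named container's image changes.
        patch = {"spec": {"template": {"spec": {"containers": [
            {"name": container, "image": image}
        ]}}}}
        apps.patch_namespaced_deployment(deployment, namespace, patch)

    # Example (hypothetical replacement tag):
    # retag("calico-system", "calico-kube-controllers",
    #       "calico-kube-controllers",
    #       "ghcr.io/flatcar/calico/kube-controllers:v3.30.3")
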