Oct 27 08:18:34.419059 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025
Oct 27 08:18:34.419086 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:18:34.419103 kernel: BIOS-provided physical RAM map:
Oct 27 08:18:34.419110 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 27 08:18:34.419117 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 27 08:18:34.419124 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 27 08:18:34.419132 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 27 08:18:34.419139 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 27 08:18:34.419150 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 27 08:18:34.419165 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 27 08:18:34.419174 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 27 08:18:34.419181 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 27 08:18:34.419188 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 27 08:18:34.419195 kernel: NX (Execute Disable) protection: active
Oct 27 08:18:34.419211 kernel: APIC: Static calls initialized
Oct 27 08:18:34.419218 kernel: SMBIOS 2.8 present.
Oct 27 08:18:34.419229 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 27 08:18:34.419237 kernel: DMI: Memory slots populated: 1/1
Oct 27 08:18:34.419244 kernel: Hypervisor detected: KVM
Oct 27 08:18:34.419252 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 27 08:18:34.419259 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 27 08:18:34.419267 kernel: kvm-clock: using sched offset of 3868812775 cycles
Oct 27 08:18:34.419276 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 27 08:18:34.419291 kernel: tsc: Detected 2794.748 MHz processor
Oct 27 08:18:34.419299 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 27 08:18:34.419307 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 27 08:18:34.419315 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 27 08:18:34.419323 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 27 08:18:34.419331 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 27 08:18:34.419339 kernel: Using GB pages for direct mapping
Oct 27 08:18:34.419347 kernel: ACPI: Early table checksum verification disabled
Oct 27 08:18:34.419362 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 27 08:18:34.419370 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419378 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419386 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419393 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 27 08:18:34.419401 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419409 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419424 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419432 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 08:18:34.419447 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 27 08:18:34.419455 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 27 08:18:34.419464 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 27 08:18:34.419478 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 27 08:18:34.419486 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 27 08:18:34.419494 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 27 08:18:34.419502 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 27 08:18:34.419510 kernel: No NUMA configuration found
Oct 27 08:18:34.419518 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 27 08:18:34.419533 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 27 08:18:34.419541 kernel: Zone ranges:
Oct 27 08:18:34.419549 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 27 08:18:34.419557 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 27 08:18:34.419565 kernel: Normal empty
Oct 27 08:18:34.419573 kernel: Device empty
Oct 27 08:18:34.419582 kernel: Movable zone start for each node
Oct 27 08:18:34.419590 kernel: Early memory node ranges
Oct 27 08:18:34.419604 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 27 08:18:34.419612 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 27 08:18:34.419620 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 27 08:18:34.419628 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 27 08:18:34.419636 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 27 08:18:34.419645 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 27 08:18:34.419655 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 27 08:18:34.419670 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 27 08:18:34.419678 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 27 08:18:34.419686 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 27 08:18:34.419696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 27 08:18:34.419705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 27 08:18:34.419713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 27 08:18:34.419721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 27 08:18:34.419747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 27 08:18:34.419755 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 27 08:18:34.419763 kernel: TSC deadline timer available
Oct 27 08:18:34.419772 kernel: CPU topo: Max. logical packages: 1
Oct 27 08:18:34.419780 kernel: CPU topo: Max. logical dies: 1
Oct 27 08:18:34.419788 kernel: CPU topo: Max. dies per package: 1
Oct 27 08:18:34.419796 kernel: CPU topo: Max. threads per core: 1
Oct 27 08:18:34.419804 kernel: CPU topo: Num. cores per package: 4
Oct 27 08:18:34.419819 kernel: CPU topo: Num. threads per package: 4
Oct 27 08:18:34.419828 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 27 08:18:34.419836 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 27 08:18:34.419844 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 27 08:18:34.419852 kernel: kvm-guest: setup PV sched yield
Oct 27 08:18:34.419874 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 27 08:18:34.419882 kernel: Booting paravirtualized kernel on KVM
Oct 27 08:18:34.419898 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 27 08:18:34.419906 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 27 08:18:34.419915 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 27 08:18:34.419923 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 27 08:18:34.419931 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 27 08:18:34.419939 kernel: kvm-guest: PV spinlocks enabled
Oct 27 08:18:34.419947 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 27 08:18:34.419963 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:18:34.419971 kernel: random: crng init done
Oct 27 08:18:34.419980 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 27 08:18:34.419988 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 27 08:18:34.419996 kernel: Fallback order for Node 0: 0
Oct 27 08:18:34.420004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 27 08:18:34.420012 kernel: Policy zone: DMA32
Oct 27 08:18:34.420027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 27 08:18:34.420036 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 27 08:18:34.420044 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 27 08:18:34.420052 kernel: ftrace: allocated 157 pages with 5 groups
Oct 27 08:18:34.420060 kernel: Dynamic Preempt: voluntary
Oct 27 08:18:34.420068 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 27 08:18:34.420077 kernel: rcu: RCU event tracing is enabled.
Oct 27 08:18:34.420091 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 27 08:18:34.420100 kernel: Trampoline variant of Tasks RCU enabled.
Oct 27 08:18:34.420110 kernel: Rude variant of Tasks RCU enabled.
Oct 27 08:18:34.420118 kernel: Tracing variant of Tasks RCU enabled.
Oct 27 08:18:34.420126 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 27 08:18:34.420134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 27 08:18:34.420142 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:18:34.420151 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:18:34.420166 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 08:18:34.420174 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 27 08:18:34.420182 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 27 08:18:34.420209 kernel: Console: colour VGA+ 80x25
Oct 27 08:18:34.420223 kernel: printk: legacy console [ttyS0] enabled
Oct 27 08:18:34.420232 kernel: ACPI: Core revision 20240827
Oct 27 08:18:34.420240 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 27 08:18:34.420249 kernel: APIC: Switch to symmetric I/O mode setup
Oct 27 08:18:34.420257 kernel: x2apic enabled
Oct 27 08:18:34.420273 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 27 08:18:34.420284 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 27 08:18:34.420292 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 27 08:18:34.420301 kernel: kvm-guest: setup PV IPIs
Oct 27 08:18:34.420316 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 27 08:18:34.420324 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 27 08:18:34.420333 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 27 08:18:34.420342 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 27 08:18:34.420350 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 27 08:18:34.420359 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 27 08:18:34.420367 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 27 08:18:34.420382 kernel: Spectre V2 : Mitigation: Retpolines
Oct 27 08:18:34.420391 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 27 08:18:34.420399 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 27 08:18:34.420408 kernel: active return thunk: retbleed_return_thunk
Oct 27 08:18:34.420416 kernel: RETBleed: Mitigation: untrained return thunk
Oct 27 08:18:34.420425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 27 08:18:34.420433 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 27 08:18:34.420452 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 27 08:18:34.420462 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 27 08:18:34.420470 kernel: active return thunk: srso_return_thunk
Oct 27 08:18:34.420479 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 27 08:18:34.420487 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 27 08:18:34.420496 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 27 08:18:34.420504 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 27 08:18:34.420520 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 27 08:18:34.420528 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 27 08:18:34.420537 kernel: Freeing SMP alternatives memory: 32K
Oct 27 08:18:34.420545 kernel: pid_max: default: 32768 minimum: 301
Oct 27 08:18:34.420553 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 27 08:18:34.420562 kernel: landlock: Up and running.
Oct 27 08:18:34.420570 kernel: SELinux: Initializing.
Oct 27 08:18:34.420587 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 08:18:34.420596 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 08:18:34.420605 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 27 08:18:34.420613 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 27 08:18:34.420622 kernel: ... version: 0
Oct 27 08:18:34.420630 kernel: ... bit width: 48
Oct 27 08:18:34.420638 kernel: ... generic registers: 6
Oct 27 08:18:34.420653 kernel: ... value mask: 0000ffffffffffff
Oct 27 08:18:34.420662 kernel: ... max period: 00007fffffffffff
Oct 27 08:18:34.420670 kernel: ... fixed-purpose events: 0
Oct 27 08:18:34.420678 kernel: ... event mask: 000000000000003f
Oct 27 08:18:34.420687 kernel: signal: max sigframe size: 1776
Oct 27 08:18:34.420695 kernel: rcu: Hierarchical SRCU implementation.
Oct 27 08:18:34.420704 kernel: rcu: Max phase no-delay instances is 400.
Oct 27 08:18:34.420719 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 27 08:18:34.420727 kernel: smp: Bringing up secondary CPUs ...
Oct 27 08:18:34.420743 kernel: smpboot: x86: Booting SMP configuration:
Oct 27 08:18:34.420751 kernel: .... node #0, CPUs: #1 #2 #3
Oct 27 08:18:34.420760 kernel: smp: Brought up 1 node, 4 CPUs
Oct 27 08:18:34.420768 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 27 08:18:34.420777 kernel: Memory: 2451436K/2571752K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114376K reserved, 0K cma-reserved)
Oct 27 08:18:34.420793 kernel: devtmpfs: initialized
Oct 27 08:18:34.420801 kernel: x86/mm: Memory block size: 128MB
Oct 27 08:18:34.420810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 27 08:18:34.420819 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 27 08:18:34.420827 kernel: pinctrl core: initialized pinctrl subsystem
Oct 27 08:18:34.420838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 27 08:18:34.420846 kernel: audit: initializing netlink subsys (disabled)
Oct 27 08:18:34.420882 kernel: audit: type=2000 audit(1761553111.237:1): state=initialized audit_enabled=0 res=1
Oct 27 08:18:34.420891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 27 08:18:34.420899 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 27 08:18:34.420908 kernel: cpuidle: using governor menu
Oct 27 08:18:34.420916 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 27 08:18:34.420924 kernel: dca service started, version 1.12.1
Oct 27 08:18:34.420933 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 27 08:18:34.420949 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 27 08:18:34.420957 kernel: PCI: Using configuration type 1 for base access
Oct 27 08:18:34.420966 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 27 08:18:34.420974 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 27 08:18:34.420983 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 27 08:18:34.420991 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 27 08:18:34.421000 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 27 08:18:34.421015 kernel: ACPI: Added _OSI(Module Device)
Oct 27 08:18:34.421023 kernel: ACPI: Added _OSI(Processor Device)
Oct 27 08:18:34.421031 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 27 08:18:34.421040 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 27 08:18:34.421048 kernel: ACPI: Interpreter enabled
Oct 27 08:18:34.421056 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 27 08:18:34.421065 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 27 08:18:34.421073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 27 08:18:34.421089 kernel: PCI: Using E820 reservations for host bridge windows
Oct 27 08:18:34.421097 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 27 08:18:34.421105 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 27 08:18:34.421357 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 27 08:18:34.421540 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 27 08:18:34.421745 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 27 08:18:34.421757 kernel: PCI host bridge to bus 0000:00
Oct 27 08:18:34.421954 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 27 08:18:34.422118 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 27 08:18:34.422279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 27 08:18:34.422437 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 27 08:18:34.422611 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 27 08:18:34.422780 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 27 08:18:34.422966 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 27 08:18:34.423162 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 27 08:18:34.423349 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 27 08:18:34.423539 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 27 08:18:34.423727 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 27 08:18:34.423932 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 27 08:18:34.424108 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 27 08:18:34.424293 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 27 08:18:34.424471 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 27 08:18:34.424666 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 27 08:18:34.424854 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 27 08:18:34.425059 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 27 08:18:34.425236 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 27 08:18:34.425411 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 27 08:18:34.425585 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 27 08:18:34.425795 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 27 08:18:34.425989 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 27 08:18:34.426165 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 27 08:18:34.426336 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 27 08:18:34.426509 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 27 08:18:34.426691 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 27 08:18:34.426909 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 27 08:18:34.427094 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 27 08:18:34.427269 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 27 08:18:34.427444 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 27 08:18:34.427627 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 27 08:18:34.427825 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 27 08:18:34.427837 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 27 08:18:34.427846 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 27 08:18:34.427854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 27 08:18:34.427882 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 27 08:18:34.427891 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 27 08:18:34.427900 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 27 08:18:34.427919 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 27 08:18:34.427928 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 27 08:18:34.427936 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 27 08:18:34.427945 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 27 08:18:34.427953 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 27 08:18:34.427962 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 27 08:18:34.427970 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 27 08:18:34.427985 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 27 08:18:34.427994 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 27 08:18:34.428002 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 27 08:18:34.428010 kernel: iommu: Default domain type: Translated
Oct 27 08:18:34.428019 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 27 08:18:34.428027 kernel: PCI: Using ACPI for IRQ routing
Oct 27 08:18:34.428036 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 27 08:18:34.428051 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 27 08:18:34.428060 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 27 08:18:34.428237 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 27 08:18:34.428409 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 27 08:18:34.428579 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 27 08:18:34.428590 kernel: vgaarb: loaded
Oct 27 08:18:34.428599 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 27 08:18:34.428619 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 27 08:18:34.428627 kernel: clocksource: Switched to clocksource kvm-clock
Oct 27 08:18:34.428635 kernel: VFS: Disk quotas dquot_6.6.0
Oct 27 08:18:34.428644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 27 08:18:34.428653 kernel: pnp: PnP ACPI init
Oct 27 08:18:34.428852 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 27 08:18:34.428894 kernel: pnp: PnP ACPI: found 6 devices
Oct 27 08:18:34.428903 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 27 08:18:34.428912 kernel: NET: Registered PF_INET protocol family
Oct 27 08:18:34.428920 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 27 08:18:34.428929 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 27 08:18:34.428938 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 27 08:18:34.428946 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 27 08:18:34.428963 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 27 08:18:34.428971 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 27 08:18:34.428980 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 08:18:34.428989 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 08:18:34.428997 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 27 08:18:34.429006 kernel: NET: Registered PF_XDP protocol family
Oct 27 08:18:34.429172 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 27 08:18:34.429345 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 27 08:18:34.429505 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 27 08:18:34.429665 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 27 08:18:34.429837 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 27 08:18:34.430013 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 27 08:18:34.430026 kernel: PCI: CLS 0 bytes, default 64
Oct 27 08:18:34.430035 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 27 08:18:34.430056 kernel: Initialise system trusted keyrings
Oct 27 08:18:34.430065 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 27 08:18:34.430073 kernel: Key type asymmetric registered
Oct 27 08:18:34.430082 kernel: Asymmetric key parser 'x509' registered
Oct 27 08:18:34.430090 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 27 08:18:34.430099 kernel: io scheduler mq-deadline registered
Oct 27 08:18:34.430107 kernel: io scheduler kyber registered
Oct 27 08:18:34.430123 kernel: io scheduler bfq registered
Oct 27 08:18:34.430132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 27 08:18:34.430141 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 27 08:18:34.430149 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 27 08:18:34.430158 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 27 08:18:34.430167 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 27 08:18:34.430176 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 27 08:18:34.430191 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 27 08:18:34.430199 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 27 08:18:34.430208 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 27 08:18:34.430216 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 27 08:18:34.430398 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 27 08:18:34.430568 kernel: rtc_cmos 00:04: registered as rtc0
Oct 27 08:18:34.430742 kernel: rtc_cmos 00:04: setting system clock to 2025-10-27T08:18:32 UTC (1761553112)
Oct 27 08:18:34.430958 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 27 08:18:34.430971 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 27 08:18:34.430980 kernel: NET: Registered PF_INET6 protocol family
Oct 27 08:18:34.430988 kernel: Segment Routing with IPv6
Oct 27 08:18:34.430997 kernel: In-situ OAM (IOAM) with IPv6
Oct 27 08:18:34.431005 kernel: NET: Registered PF_PACKET protocol family
Oct 27 08:18:34.431025 kernel: Key type dns_resolver registered
Oct 27 08:18:34.431033 kernel: IPI shorthand broadcast: enabled
Oct 27 08:18:34.431042 kernel: sched_clock: Marking stable (1701002745, 200513487)->(1955094811, -53578579)
Oct 27 08:18:34.431051 kernel: registered taskstats version 1
Oct 27 08:18:34.431059 kernel: Loading compiled-in X.509 certificates
Oct 27 08:18:34.431068 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb'
Oct 27 08:18:34.431076 kernel: Demotion targets for Node 0: null
Oct 27 08:18:34.431085 kernel: Key type .fscrypt registered
Oct 27 08:18:34.431101 kernel: Key type fscrypt-provisioning registered
Oct 27 08:18:34.431109 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 27 08:18:34.431117 kernel: ima: Allocated hash algorithm: sha1
Oct 27 08:18:34.431126 kernel: ima: No architecture policies found
Oct 27 08:18:34.431134 kernel: clk: Disabling unused clocks
Oct 27 08:18:34.431143 kernel: Freeing unused kernel image (initmem) memory: 15964K
Oct 27 08:18:34.431152 kernel: Write protecting the kernel read-only data: 40960k
Oct 27 08:18:34.431172 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Oct 27 08:18:34.431181 kernel: Run /init as init process
Oct 27 08:18:34.431189 kernel: with arguments:
Oct 27 08:18:34.431198 kernel: /init
Oct 27 08:18:34.431206 kernel: with environment:
Oct 27 08:18:34.431215 kernel: HOME=/
Oct 27 08:18:34.431223 kernel: TERM=linux
Oct 27 08:18:34.431238 kernel: SCSI subsystem initialized
Oct 27 08:18:34.431246 kernel: libata version 3.00 loaded.
Oct 27 08:18:34.431425 kernel: ahci 0000:00:1f.2: version 3.0
Oct 27 08:18:34.431495 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 27 08:18:34.431671 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 27 08:18:34.431876 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 27 08:18:34.432078 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 27 08:18:34.432276 kernel: scsi host0: ahci
Oct 27 08:18:34.432464 kernel: scsi host1: ahci
Oct 27 08:18:34.432650 kernel: scsi host2: ahci
Oct 27 08:18:34.432849 kernel: scsi host3: ahci
Oct 27 08:18:34.433055 kernel: scsi host4: ahci
Oct 27 08:18:34.433258 kernel: scsi host5: ahci
Oct 27 08:18:34.433271 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Oct 27 08:18:34.433280 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Oct 27 08:18:34.433289 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Oct 27 08:18:34.433298 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Oct 27 08:18:34.433307 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Oct 27 08:18:34.433326 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Oct 27 08:18:34.433335 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 27 08:18:34.433344 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 27 08:18:34.433353 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 27 08:18:34.433362 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 27 08:18:34.433370 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 27 08:18:34.433379 kernel: ata3.00: LPM support broken, forcing max_power
Oct 27 08:18:34.433395 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 27 08:18:34.433404 kernel: ata3.00: applying bridge limits
Oct 27 08:18:34.433412 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 27 08:18:34.433421 kernel: ata3.00: LPM support broken, forcing max_power
Oct 27 08:18:34.433430 kernel: ata3.00: configured for UDMA/100
Oct 27 08:18:34.433697 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 27 08:18:34.433974 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 27 08:18:34.434168 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 27 08:18:34.434182 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 27 08:18:34.434193 kernel: GPT:16515071 != 27000831
Oct 27 08:18:34.434203 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 27 08:18:34.434212 kernel: GPT:16515071 != 27000831
Oct 27 08:18:34.434221 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 27 08:18:34.434239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 27 08:18:34.434249 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434445 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 27 08:18:34.434457 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 27 08:18:34.434649 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 27 08:18:34.434661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 27 08:18:34.434681 kernel: device-mapper: uevent: version 1.0.3
Oct 27 08:18:34.434690 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 27 08:18:34.434699 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 27 08:18:34.434715 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434724 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434745 kernel: raid6: avx2x4 gen() 29345 MB/s
Oct 27 08:18:34.434754 kernel: raid6: avx2x2 gen() 30936 MB/s
Oct 27 08:18:34.434763 kernel: raid6: avx2x1 gen() 25623 MB/s
Oct 27 08:18:34.434772 kernel: raid6: using algorithm avx2x2 gen() 30936 MB/s
Oct 27 08:18:34.434781 kernel: raid6: .... xor() 19317 MB/s, rmw enabled
Oct 27 08:18:34.434791 kernel: raid6: using avx2x2 recovery algorithm
Oct 27 08:18:34.434800 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434809 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434824 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434833 kernel: xor: automatically using best checksumming function avx
Oct 27 08:18:34.434842 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434851 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 27 08:18:34.434877 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (176)
Oct 27 08:18:34.434886 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2
Oct 27 08:18:34.434895 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:18:34.434912 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 27 08:18:34.434921 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 27 08:18:34.434930 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 08:18:34.434938 kernel: loop: module loaded
Oct 27 08:18:34.434954 kernel: loop0: detected capacity change from 0 to 100120
Oct 27 08:18:34.434963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 27 08:18:34.434973 systemd[1]: Successfully made /usr/ read-only.
Oct 27 08:18:34.434992 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 08:18:34.435002 systemd[1]: Detected virtualization kvm.
Oct 27 08:18:34.435012 systemd[1]: Detected architecture x86-64.
Oct 27 08:18:34.435021 systemd[1]: Running in initrd.
Oct 27 08:18:34.435030 systemd[1]: No hostname configured, using default hostname.
Oct 27 08:18:34.435040 systemd[1]: Hostname set to .
Oct 27 08:18:34.435056 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 08:18:34.435065 systemd[1]: Queued start job for default target initrd.target.
Oct 27 08:18:34.435075 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 08:18:34.435097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:18:34.435107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:18:34.435127 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 27 08:18:34.435145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 08:18:34.435156 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 27 08:18:34.435165 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 27 08:18:34.435175 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:18:34.435185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:18:34.435194 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 08:18:34.435215 systemd[1]: Reached target paths.target - Path Units.
Oct 27 08:18:34.435224 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 08:18:34.435233 systemd[1]: Reached target swap.target - Swaps.
Oct 27 08:18:34.435243 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 08:18:34.435253 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 08:18:34.435262 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 08:18:34.435272 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 27 08:18:34.435289 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 27 08:18:34.435299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:18:34.435308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:18:34.435318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:18:34.435327 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 08:18:34.435337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 27 08:18:34.435353 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 27 08:18:34.435363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 08:18:34.435372 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 27 08:18:34.435382 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 27 08:18:34.435392 systemd[1]: Starting systemd-fsck-usr.service...
Oct 27 08:18:34.435401 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 08:18:34.435411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 08:18:34.435427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:18:34.435437 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 27 08:18:34.435486 systemd-journald[309]: Collecting audit messages is disabled.
Oct 27 08:18:34.435516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:18:34.435525 systemd[1]: Finished systemd-fsck-usr.service.
Oct 27 08:18:34.435535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 27 08:18:34.435545 systemd-journald[309]: Journal started
Oct 27 08:18:34.435572 systemd-journald[309]: Runtime Journal (/run/log/journal/9c6f4a9202764f1bb6f44e1e77cead0c) is 6M, max 48.3M, 42.2M free.
Oct 27 08:18:34.439897 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 08:18:34.444145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 08:18:34.459030 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 08:18:34.466021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 08:18:34.468197 systemd-tmpfiles[327]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 27 08:18:34.482065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:18:34.562597 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 27 08:18:34.562650 kernel: Bridge firewalling registered
Oct 27 08:18:34.485131 systemd-modules-load[312]: Inserted module 'br_netfilter'
Oct 27 08:18:34.568275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:18:34.572395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:18:34.578096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 27 08:18:34.580729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 08:18:34.585721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:18:34.615844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:18:34.617674 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 08:18:34.634088 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 08:18:34.640128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 27 08:18:34.677693 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e
Oct 27 08:18:34.696225 systemd-resolved[348]: Positive Trust Anchors:
Oct 27 08:18:34.696250 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 08:18:34.696256 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 27 08:18:34.696296 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 08:18:34.735913 systemd-resolved[348]: Defaulting to hostname 'linux'.
Oct 27 08:18:34.738246 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 08:18:34.742121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:18:34.825903 kernel: Loading iSCSI transport class v2.0-870.
Oct 27 08:18:34.841897 kernel: iscsi: registered transport (tcp)
Oct 27 08:18:34.872418 kernel: iscsi: registered transport (qla4xxx)
Oct 27 08:18:34.872460 kernel: QLogic iSCSI HBA Driver
Oct 27 08:18:34.911972 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 08:18:34.942802 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:18:34.943353 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 08:18:35.018146 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 27 08:18:35.020194 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 27 08:18:35.025123 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 27 08:18:35.100426 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 08:18:35.102407 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:18:35.135333 systemd-udevd[592]: Using default interface naming scheme 'v257'.
Oct 27 08:18:35.149913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:18:35.155986 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 27 08:18:35.181061 dracut-pre-trigger[650]: rd.md=0: removing MD RAID activation
Oct 27 08:18:35.206952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 08:18:35.218089 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 08:18:35.219939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 08:18:35.225261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 08:18:35.274530 systemd-networkd[724]: lo: Link UP
Oct 27 08:18:35.274539 systemd-networkd[724]: lo: Gained carrier
Oct 27 08:18:35.275204 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 27 08:18:35.277560 systemd[1]: Reached target network.target - Network.
Oct 27 08:18:35.313444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:18:35.318757 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 27 08:18:35.354313 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 27 08:18:35.377596 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 27 08:18:35.404413 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 27 08:18:35.427890 kernel: cryptd: max_cpu_qlen set to 1000
Oct 27 08:18:35.437659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 08:18:35.439492 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 27 08:18:35.449118 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:18:35.449243 systemd-networkd[724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 27 08:18:35.449951 systemd-networkd[724]: eth0: Link UP
Oct 27 08:18:35.471282 kernel: AES CTR mode by8 optimization enabled
Oct 27 08:18:35.450170 systemd-networkd[724]: eth0: Gained carrier
Oct 27 08:18:35.450179 systemd-networkd[724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:18:35.483203 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 27 08:18:35.455681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:18:35.455824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:18:35.461118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:18:35.467337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:18:35.491581 disk-uuid[823]: Primary Header is updated.
Oct 27 08:18:35.491581 disk-uuid[823]: Secondary Entries is updated.
Oct 27 08:18:35.491581 disk-uuid[823]: Secondary Header is updated.
Oct 27 08:18:35.477025 systemd-networkd[724]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 27 08:18:35.594449 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 27 08:18:35.621263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:18:35.626305 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 08:18:35.628461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:18:35.634233 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 08:18:35.638917 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 27 08:18:35.665316 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 08:18:35.826412 systemd-resolved[348]: Detected conflict on linux IN A 10.0.0.35
Oct 27 08:18:35.826430 systemd-resolved[348]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Oct 27 08:18:36.547477 disk-uuid[835]: Warning: The kernel is still using the old partition table.
Oct 27 08:18:36.547477 disk-uuid[835]: The new table will be used at the next reboot or after you
Oct 27 08:18:36.547477 disk-uuid[835]: run partprobe(8) or kpartx(8)
Oct 27 08:18:36.547477 disk-uuid[835]: The operation has completed successfully.
Oct 27 08:18:36.566404 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 27 08:18:36.566557 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 27 08:18:36.570536 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 27 08:18:36.598157 systemd-networkd[724]: eth0: Gained IPv6LL
Oct 27 08:18:36.612310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862)
Oct 27 08:18:36.612395 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:18:36.612411 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:18:36.617720 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:18:36.617756 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:18:36.625914 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:18:36.627027 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 27 08:18:36.628322 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 27 08:18:37.497355 ignition[881]: Ignition 2.22.0
Oct 27 08:18:37.497371 ignition[881]: Stage: fetch-offline
Oct 27 08:18:37.497434 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:37.497457 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:37.497562 ignition[881]: parsed url from cmdline: ""
Oct 27 08:18:37.497567 ignition[881]: no config URL provided
Oct 27 08:18:37.497572 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Oct 27 08:18:37.497585 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Oct 27 08:18:37.497667 ignition[881]: op(1): [started] loading QEMU firmware config module
Oct 27 08:18:37.497673 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 27 08:18:37.518985 ignition[881]: op(1): [finished] loading QEMU firmware config module
Oct 27 08:18:37.601898 ignition[881]: parsing config with SHA512: b7330a6e36a08bfa89df306c718a394d2e63debcfc7da53f6c4655de6ad39e784122f512031e17346ef06482305d53bc92fb5f62c2efdbbd7c811424c9c4c692
Oct 27 08:18:37.610477 unknown[881]: fetched base config from "system"
Oct 27 08:18:37.610492 unknown[881]: fetched user config from "qemu"
Oct 27 08:18:37.610956 ignition[881]: fetch-offline: fetch-offline passed
Oct 27 08:18:37.611023 ignition[881]: Ignition finished successfully
Oct 27 08:18:37.614751 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 08:18:37.617071 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 27 08:18:37.618170 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 27 08:18:38.270384 ignition[892]: Ignition 2.22.0
Oct 27 08:18:38.270401 ignition[892]: Stage: kargs
Oct 27 08:18:38.270656 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:38.270669 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:38.271604 ignition[892]: kargs: kargs passed
Oct 27 08:18:38.271705 ignition[892]: Ignition finished successfully
Oct 27 08:18:38.282802 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 27 08:18:38.286454 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 27 08:18:38.390336 ignition[900]: Ignition 2.22.0
Oct 27 08:18:38.390349 ignition[900]: Stage: disks
Oct 27 08:18:38.390528 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:38.390539 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:38.391356 ignition[900]: disks: disks passed
Oct 27 08:18:38.391407 ignition[900]: Ignition finished successfully
Oct 27 08:18:38.402896 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 27 08:18:38.405161 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 27 08:18:38.408841 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 27 08:18:38.409252 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 08:18:38.410224 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 27 08:18:38.422258 systemd[1]: Reached target basic.target - Basic System.
Oct 27 08:18:38.427504 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 27 08:18:38.511708 systemd-fsck[911]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 27 08:18:38.520518 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 27 08:18:38.527138 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 27 08:18:38.786889 kernel: EXT4-fs (vda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none.
Oct 27 08:18:38.787124 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 27 08:18:38.789086 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 27 08:18:38.824682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 08:18:38.827661 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 27 08:18:38.829750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 27 08:18:38.829792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 27 08:18:38.829819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 08:18:38.848202 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 27 08:18:38.852592 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Oct 27 08:18:38.851250 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 27 08:18:38.861740 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:18:38.861778 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:18:38.861797 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:18:38.861812 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:18:38.862963 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 08:18:38.921498 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Oct 27 08:18:38.926374 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Oct 27 08:18:38.971850 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Oct 27 08:18:38.977909 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 27 08:18:39.083616 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 27 08:18:39.092421 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 27 08:18:39.093452 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 27 08:18:39.112719 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 27 08:18:39.115092 kernel: BTRFS info (device vda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:18:39.130072 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 27 08:18:39.160081 ignition[1034]: INFO : Ignition 2.22.0
Oct 27 08:18:39.160081 ignition[1034]: INFO : Stage: mount
Oct 27 08:18:39.163439 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:39.163439 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:39.163439 ignition[1034]: INFO : mount: mount passed
Oct 27 08:18:39.163439 ignition[1034]: INFO : Ignition finished successfully
Oct 27 08:18:39.163756 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 27 08:18:39.168623 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 27 08:18:39.789279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 08:18:39.822405 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Oct 27 08:18:39.822443 kernel: BTRFS info (device vda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883
Oct 27 08:18:39.822455 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 27 08:18:39.827661 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 08:18:39.827730 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 08:18:39.829647 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 08:18:39.869399 ignition[1063]: INFO : Ignition 2.22.0
Oct 27 08:18:39.869399 ignition[1063]: INFO : Stage: files
Oct 27 08:18:39.872013 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:39.872013 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:39.872013 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Oct 27 08:18:39.878159 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 27 08:18:39.878159 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 27 08:18:39.883905 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 27 08:18:39.886208 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 27 08:18:39.888428 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 27 08:18:39.888428 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:18:39.886781 unknown[1063]: wrote ssh authorized keys file for user: core
Oct 27 08:18:39.895264 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 27 08:18:39.922896 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 27 08:18:40.047619 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 27 08:18:40.047619 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 08:18:40.054249 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:18:40.083986 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:18:40.083986 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:18:40.083986 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 27 08:18:40.513431 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 27 08:18:41.372481 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 27 08:18:41.372481 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 27 08:18:41.379358 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:18:41.385122 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 08:18:41.385122 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 27 08:18:41.385122 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 27 08:18:41.393977 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 08:18:41.393977 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 08:18:41.393977 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 27 08:18:41.393977 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 27 08:18:41.413391 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 08:18:41.423997 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 08:18:41.426747 ignition[1063]: INFO : files: files passed
Oct 27 08:18:41.426747 ignition[1063]: INFO : Ignition finished successfully
Oct 27 08:18:41.443096 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 27 08:18:41.448008 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 27 08:18:41.451996 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 27 08:18:41.462915 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 27 08:18:41.463095 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 27 08:18:41.472165 initrd-setup-root-after-ignition[1094]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 27 08:18:41.478588 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf
Oct 27 08:18:41.480828 initrd-setup-root-after-ignition[1096]: grep:
Oct 27 08:18:41.482130 initrd-setup-root-after-ignition[1100]: : No such file or directory
Oct 27 08:18:41.483754 initrd-setup-root-after-ignition[1096]: /sysroot/etc/flatcar/enabled-sysext.conf
Oct 27 08:18:41.483714 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:18:41.491214 initrd-setup-root-after-ignition[1096]: : No such file or directory
Oct 27 08:18:41.491214 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 08:18:41.484100 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 27 08:18:41.491335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 27 08:18:41.569776 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 27 08:18:41.569944 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 27 08:18:41.573786 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 27 08:18:41.575609 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 27 08:18:41.582361 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 27 08:18:41.583435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 27 08:18:41.617957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:18:41.622256 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 27 08:18:41.654244 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 08:18:41.654406 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:18:41.660013 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:18:41.664941 systemd[1]: Stopped target timers.target - Timer Units.
Oct 27 08:18:41.668440 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 27 08:18:41.668756 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 08:18:41.675550 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 27 08:18:41.677510 systemd[1]: Stopped target basic.target - Basic System.
Oct 27 08:18:41.680623 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 27 08:18:41.682102 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 08:18:41.685451 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 27 08:18:41.689384 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 08:18:41.693603 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 27 08:18:41.694572 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 08:18:41.700203 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 27 08:18:41.704227 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 27 08:18:41.709570 systemd[1]: Stopped target swap.target - Swaps.
Oct 27 08:18:41.711241 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 27 08:18:41.711549 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 08:18:41.719269 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:18:41.721444 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:18:41.725408 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 27 08:18:41.725682 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:18:41.727661 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 27 08:18:41.727942 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 27 08:18:41.735369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 27 08:18:41.735686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 08:18:41.737589 systemd[1]: Stopped target paths.target - Path Units.
Oct 27 08:18:41.741259 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 27 08:18:41.744578 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:18:41.748853 systemd[1]: Stopped target slices.target - Slice Units.
Oct 27 08:18:41.750521 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 27 08:18:41.751498 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 27 08:18:41.751685 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 08:18:41.757779 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 27 08:18:41.758042 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 08:18:41.762926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 27 08:18:41.763151 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 08:18:41.764590 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 27 08:18:41.764833 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 27 08:18:41.773119 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 27 08:18:41.774857 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 27 08:18:41.775276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:18:41.779966 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 27 08:18:41.783528 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 27 08:18:41.783796 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:18:41.787138 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 27 08:18:41.787324 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:18:41.791024 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 27 08:18:41.791233 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 08:18:41.801479 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 27 08:18:41.801633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 27 08:18:41.833621 ignition[1120]: INFO : Ignition 2.22.0
Oct 27 08:18:41.833621 ignition[1120]: INFO : Stage: umount
Oct 27 08:18:41.836822 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 08:18:41.836822 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 08:18:41.836822 ignition[1120]: INFO : umount: umount passed
Oct 27 08:18:41.836822 ignition[1120]: INFO : Ignition finished successfully
Oct 27 08:18:41.843690 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 27 08:18:41.844492 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 27 08:18:41.844649 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 27 08:18:41.849597 systemd[1]: Stopped target network.target - Network.
Oct 27 08:18:41.855009 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 27 08:18:41.855136 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 27 08:18:41.859709 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 27 08:18:41.859807 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 27 08:18:41.864593 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 27 08:18:41.864689 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 27 08:18:41.867965 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 27 08:18:41.868037 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 27 08:18:41.871585 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 27 08:18:41.874648 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 27 08:18:41.889764 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 27 08:18:41.890004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 27 08:18:41.898365 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 27 08:18:41.898567 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 27 08:18:41.906774 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 27 08:18:41.907302 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 27 08:18:41.907399 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:18:41.914084 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 27 08:18:41.915711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 27 08:18:41.915818 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 08:18:41.919548 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 27 08:18:41.919632 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:18:41.922773 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 27 08:18:41.922841 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:18:41.926273 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:18:41.931136 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 27 08:18:41.931298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 27 08:18:41.934734 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 27 08:18:41.934887 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 27 08:18:41.954511 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 27 08:18:41.954839 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:18:41.961756 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 27 08:18:41.961923 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:18:41.967120 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 27 08:18:41.967181 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:18:41.972252 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 27 08:18:41.972347 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 08:18:41.977269 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 27 08:18:41.977347 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 27 08:18:41.982316 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 27 08:18:41.982417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 08:18:41.989720 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 27 08:18:41.991841 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 27 08:18:41.991945 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:18:41.996300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 27 08:18:41.996371 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:18:41.998158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 08:18:41.998220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:18:41.999542 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 27 08:18:42.014135 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 27 08:18:42.024491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 27 08:18:42.024723 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 27 08:18:42.026664 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 27 08:18:42.033580 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 27 08:18:42.066958 systemd[1]: Switching root.
Oct 27 08:18:42.106546 systemd-journald[309]: Journal stopped
Oct 27 08:18:44.026582 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Oct 27 08:18:44.026680 kernel: SELinux: policy capability network_peer_controls=1
Oct 27 08:18:44.026710 kernel: SELinux: policy capability open_perms=1
Oct 27 08:18:44.026925 kernel: SELinux: policy capability extended_socket_class=1
Oct 27 08:18:44.026955 kernel: SELinux: policy capability always_check_network=0
Oct 27 08:18:44.027005 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 27 08:18:44.027035 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 27 08:18:44.027064 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 27 08:18:44.027100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 27 08:18:44.027118 kernel: SELinux: policy capability userspace_initial_context=0
Oct 27 08:18:44.027158 kernel: audit: type=1403 audit(1761553123.046:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 27 08:18:44.027191 systemd[1]: Successfully loaded SELinux policy in 82.251ms.
Oct 27 08:18:44.027231 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.178ms.
Oct 27 08:18:44.027263 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 08:18:44.027302 systemd[1]: Detected virtualization kvm.
Oct 27 08:18:44.027332 systemd[1]: Detected architecture x86-64.
Oct 27 08:18:44.027363 systemd[1]: Detected first boot.
Oct 27 08:18:44.027409 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 08:18:44.027439 zram_generator::config[1165]: No configuration found.
Oct 27 08:18:44.027486 kernel: Guest personality initialized and is inactive
Oct 27 08:18:44.027516 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 27 08:18:44.027552 kernel: Initialized host personality
Oct 27 08:18:44.027580 kernel: NET: Registered PF_VSOCK protocol family
Oct 27 08:18:44.027616 systemd[1]: Populated /etc with preset unit settings.
Oct 27 08:18:44.027664 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 27 08:18:44.027694 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 27 08:18:44.027734 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 27 08:18:44.027764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 27 08:18:44.027795 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 27 08:18:44.027830 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 27 08:18:44.027881 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 27 08:18:44.027930 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 27 08:18:44.027961 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 27 08:18:44.027993 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 27 08:18:44.028023 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 27 08:18:44.028058 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 08:18:44.028088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 08:18:44.028119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 27 08:18:44.028165 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 27 08:18:44.028198 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 27 08:18:44.028234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 08:18:44.028265 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 27 08:18:44.028295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 08:18:44.028325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 08:18:44.028380 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 27 08:18:44.028411 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 27 08:18:44.028441 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 27 08:18:44.028488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 27 08:18:44.028519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 08:18:44.028552 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 08:18:44.028582 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 08:18:44.028629 systemd[1]: Reached target swap.target - Swaps.
Oct 27 08:18:44.028659 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 27 08:18:44.028688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 27 08:18:44.028717 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 27 08:18:44.028747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 08:18:44.028776 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 08:18:44.028807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 08:18:44.028852 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 27 08:18:44.028905 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 27 08:18:44.028936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 27 08:18:44.028967 systemd[1]: Mounting media.mount - External Media Directory...
Oct 27 08:18:44.028998 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:44.029029 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 27 08:18:44.029060 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 27 08:18:44.029101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 27 08:18:44.029120 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 27 08:18:44.029137 systemd[1]: Reached target machines.target - Containers.
Oct 27 08:18:44.029162 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 27 08:18:44.029181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:18:44.029199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 08:18:44.029217 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 27 08:18:44.029250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:18:44.029280 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 08:18:44.029313 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:18:44.029342 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 27 08:18:44.029371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:18:44.029400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 27 08:18:44.029452 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 27 08:18:44.029495 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 27 08:18:44.029526 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 27 08:18:44.029555 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 27 08:18:44.029587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:18:44.029617 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 08:18:44.029646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 08:18:44.029692 kernel: fuse: init (API version 7.41)
Oct 27 08:18:44.029722 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 08:18:44.029752 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 27 08:18:44.029783 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 27 08:18:44.029822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 08:18:44.029888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:44.029918 kernel: ACPI: bus type drm_connector registered
Oct 27 08:18:44.029949 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 27 08:18:44.029978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 27 08:18:44.030037 systemd-journald[1229]: Collecting audit messages is disabled.
Oct 27 08:18:44.030104 systemd[1]: Mounted media.mount - External Media Directory.
Oct 27 08:18:44.030136 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 27 08:18:44.030163 systemd-journald[1229]: Journal started
Oct 27 08:18:44.030220 systemd-journald[1229]: Runtime Journal (/run/log/journal/9c6f4a9202764f1bb6f44e1e77cead0c) is 6M, max 48.3M, 42.2M free.
Oct 27 08:18:43.723385 systemd[1]: Queued start job for default target multi-user.target.
Oct 27 08:18:43.743035 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 27 08:18:43.743573 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 27 08:18:44.037883 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 08:18:44.040526 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 27 08:18:44.042800 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 27 08:18:44.045013 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 27 08:18:44.047615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 08:18:44.050306 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 27 08:18:44.050541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 27 08:18:44.053278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:18:44.053608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:18:44.056634 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 08:18:44.056983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 08:18:44.059558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:18:44.059855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:18:44.062661 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 27 08:18:44.062972 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 27 08:18:44.065381 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:18:44.065687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:18:44.068131 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 08:18:44.070479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 08:18:44.074484 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 27 08:18:44.077309 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 27 08:18:44.101253 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 08:18:44.104307 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 27 08:18:44.108575 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 27 08:18:44.111678 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 27 08:18:44.113666 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 27 08:18:44.113696 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 08:18:44.116585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 27 08:18:44.120006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:18:44.127251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 27 08:18:44.130679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 27 08:18:44.132784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 08:18:44.135364 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 27 08:18:44.137396 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 08:18:44.140363 systemd-journald[1229]: Time spent on flushing to /var/log/journal/9c6f4a9202764f1bb6f44e1e77cead0c is 19.964ms for 974 entries.
Oct 27 08:18:44.140363 systemd-journald[1229]: System Journal (/var/log/journal/9c6f4a9202764f1bb6f44e1e77cead0c) is 8M, max 163.5M, 155.5M free.
Oct 27 08:18:44.502443 systemd-journald[1229]: Received client request to flush runtime journal.
Oct 27 08:18:44.502555 kernel: loop1: detected capacity change from 0 to 229808
Oct 27 08:18:44.502587 kernel: loop2: detected capacity change from 0 to 110984
Oct 27 08:18:44.502601 kernel: loop3: detected capacity change from 0 to 128048
Oct 27 08:18:44.140129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 08:18:44.146362 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 27 08:18:44.149590 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 27 08:18:44.155219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 08:18:44.158995 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 27 08:18:44.202245 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 27 08:18:44.222262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 08:18:44.563293 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 27 08:18:44.566097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 27 08:18:44.572910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 08:18:44.579306 kernel: loop4: detected capacity change from 0 to 229808
Oct 27 08:18:44.577240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 08:18:44.619151 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 27 08:18:44.625573 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Oct 27 08:18:44.625592 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Oct 27 08:18:44.629001 kernel: loop5: detected capacity change from 0 to 110984
Oct 27 08:18:44.634399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 08:18:44.640888 kernel: loop6: detected capacity change from 0 to 128048
Oct 27 08:18:44.650219 (sd-merge)[1299]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 27 08:18:44.665578 (sd-merge)[1299]: Merged extensions into '/usr'.
Oct 27 08:18:44.673229 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 27 08:18:44.676103 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 27 08:18:44.676126 systemd[1]: Reloading...
Oct 27 08:18:44.735060 zram_generator::config[1337]: No configuration found.
Oct 27 08:18:44.771984 systemd-resolved[1300]: Positive Trust Anchors:
Oct 27 08:18:44.772006 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 08:18:44.772010 systemd-resolved[1300]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 27 08:18:44.772052 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 08:18:44.777072 systemd-resolved[1300]: Defaulting to hostname 'linux'.
Oct 27 08:18:44.931048 systemd[1]: Reloading finished in 254 ms.
Oct 27 08:18:45.014576 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 27 08:18:45.016988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 08:18:45.019388 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 27 08:18:45.024106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 27 08:18:45.026513 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 08:18:45.046337 systemd[1]: Starting ensure-sysext.service...
Oct 27 08:18:45.048959 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 27 08:18:45.052731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 08:18:45.188526 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 27 08:18:45.188567 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 27 08:18:45.188960 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 27 08:18:45.189245 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 27 08:18:45.190322 systemd-tmpfiles[1376]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 27 08:18:45.190613 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Oct 27 08:18:45.190688 systemd-tmpfiles[1376]: ACLs are not supported, ignoring.
Oct 27 08:18:45.196556 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)...
Oct 27 08:18:45.196577 systemd[1]: Reloading...
Oct 27 08:18:45.246895 zram_generator::config[1409]: No configuration found.
Oct 27 08:18:45.369696 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 08:18:45.369716 systemd-tmpfiles[1376]: Skipping /boot
Oct 27 08:18:45.381564 systemd-tmpfiles[1376]: Detected autofs mount point /boot during canonicalization of boot.
Oct 27 08:18:45.381579 systemd-tmpfiles[1376]: Skipping /boot
Oct 27 08:18:45.563925 systemd[1]: Reloading finished in 366 ms.
Oct 27 08:18:45.603933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 08:18:45.614180 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 08:18:45.618166 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 27 08:18:45.625417 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 27 08:18:45.628352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 27 08:18:45.631449 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 27 08:18:45.634702 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 27 08:18:45.640832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.641267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:18:45.644194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:18:45.647343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:18:45.650851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:18:45.652742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:18:45.652940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:18:45.654974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 08:18:45.656889 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.659164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:18:45.659593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:18:45.684378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:18:45.684672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:18:45.687246 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:18:45.687489 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:18:45.694606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.694787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:18:45.696350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:18:45.699307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:18:45.708989 systemd-udevd[1454]: Using default interface naming scheme 'v257'.
Oct 27 08:18:45.709920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:18:45.713048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:18:45.713224 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:18:45.713337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.714840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:18:45.715168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:18:45.717571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:18:45.717794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:18:45.720873 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:18:45.721099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:18:45.729142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.729370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 27 08:18:45.730975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 27 08:18:45.734741 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 27 08:18:45.744152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 27 08:18:45.747810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 27 08:18:45.750119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 27 08:18:45.750332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 27 08:18:45.750665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 27 08:18:45.752547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 27 08:18:45.753093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 27 08:18:45.756132 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 27 08:18:45.756373 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 27 08:18:45.758646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 27 08:18:45.758912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 27 08:18:45.761398 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 27 08:18:45.761633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 27 08:18:45.769257 systemd[1]: Finished ensure-sysext.service.
Oct 27 08:18:45.774587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 27 08:18:45.774743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 27 08:18:45.777211 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 27 08:18:45.779469 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 27 08:18:45.819538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 27 08:18:45.843896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 08:18:45.892278 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 08:18:45.923946 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 27 08:18:45.944741 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 27 08:18:45.944836 systemd[1]: Reached target time-set.target - System Time Set.
Oct 27 08:18:45.981312 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 27 08:18:45.983262 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 27 08:18:46.001046 augenrules[1527]: No rules
Oct 27 08:18:46.003492 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 27 08:18:46.004354 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 27 08:18:46.007443 kernel: mousedev: PS/2 mouse device common for all mice
Oct 27 08:18:46.031890 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 27 08:18:46.041535 systemd-networkd[1513]: lo: Link UP
Oct 27 08:18:46.041553 systemd-networkd[1513]: lo: Gained carrier
Oct 27 08:18:46.042964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 08:18:46.046074 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:18:46.046088 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 27 08:18:46.046330 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 27 08:18:46.048810 systemd-networkd[1513]: eth0: Link UP
Oct 27 08:18:46.049098 systemd-networkd[1513]: eth0: Gained carrier
Oct 27 08:18:46.049374 systemd[1]: Reached target network.target - Network.
Oct 27 08:18:46.050038 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 08:18:46.055389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 27 08:18:46.062092 kernel: ACPI: button: Power Button [PWRF]
Oct 27 08:18:46.059239 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 27 08:18:46.065087 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 27 08:18:46.065378 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 27 08:18:46.067318 systemd-timesyncd[1473]: Network configuration changed, trying to establish connection.
Oct 27 08:18:46.597713 systemd-resolved[1300]: Clock change detected. Flushing caches.
Oct 27 08:18:46.597784 systemd-timesyncd[1473]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 27 08:18:46.597853 systemd-timesyncd[1473]: Initial clock synchronization to Mon 2025-10-27 08:18:46.597658 UTC.
Oct 27 08:18:46.611733 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 27 08:18:46.614631 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 27 08:18:46.701887 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 27 08:18:46.713257 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 27 08:18:46.722002 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 27 08:18:46.722341 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 27 08:18:46.796126 kernel: kvm_amd: TSC scaling supported
Oct 27 08:18:46.796190 kernel: kvm_amd: Nested Virtualization enabled
Oct 27 08:18:46.796227 kernel: kvm_amd: Nested Paging enabled
Oct 27 08:18:46.797914 kernel: kvm_amd: LBR virtualization supported
Oct 27 08:18:46.810934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 08:18:46.841879 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 27 08:18:46.841974 kernel: kvm_amd: Virtual GIF supported
Oct 27 08:18:46.890594 kernel: EDAC MC: Ver: 3.0.0
Oct 27 08:18:47.012942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 08:18:47.165965 ldconfig[1447]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 27 08:18:47.173096 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 27 08:18:47.176737 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 27 08:18:47.213658 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 27 08:18:47.215820 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 27 08:18:47.217683 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 27 08:18:47.219728 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 27 08:18:47.221789 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 27 08:18:47.224034 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 27 08:18:47.226089 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 27 08:18:47.228357 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 27 08:18:47.230578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 27 08:18:47.230617 systemd[1]: Reached target paths.target - Path Units.
Oct 27 08:18:47.232270 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 08:18:47.236456 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 27 08:18:47.242302 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 27 08:18:47.261272 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 27 08:18:47.264018 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 27 08:18:47.266365 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 27 08:18:47.271711 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 27 08:18:47.273991 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 27 08:18:47.277019 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 27 08:18:47.279951 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 08:18:47.281783 systemd[1]: Reached target basic.target - Basic System.
Oct 27 08:18:47.283588 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 27 08:18:47.283619 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 27 08:18:47.284780 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 27 08:18:47.287882 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 27 08:18:47.291249 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 27 08:18:47.299491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 27 08:18:47.302860 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 27 08:18:47.304786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 27 08:18:47.305968 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 27 08:18:47.309264 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 27 08:18:47.311807 jq[1574]: false
Oct 27 08:18:47.314562 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 27 08:18:47.317781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 27 08:18:47.322088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 27 08:18:47.327159 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Refreshing passwd entry cache
Oct 27 08:18:47.327436 oslogin_cache_refresh[1576]: Refreshing passwd entry cache
Oct 27 08:18:47.328838 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 27 08:18:47.330717 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 27 08:18:47.331234 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 27 08:18:47.334256 systemd[1]: Starting update-engine.service - Update Engine...
Oct 27 08:18:47.335078 extend-filesystems[1575]: Found /dev/vda6
Oct 27 08:18:47.338191 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Failure getting users, quitting
Oct 27 08:18:47.338184 oslogin_cache_refresh[1576]: Failure getting users, quitting
Oct 27 08:18:47.338287 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 27 08:18:47.338287 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Refreshing group entry cache
Oct 27 08:18:47.338211 oslogin_cache_refresh[1576]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 27 08:18:47.338285 oslogin_cache_refresh[1576]: Refreshing group entry cache
Oct 27 08:18:47.338846 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 27 08:18:47.348443 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Failure getting groups, quitting
Oct 27 08:18:47.348443 google_oslogin_nss_cache[1576]: oslogin_cache_refresh[1576]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 27 08:18:47.343663 oslogin_cache_refresh[1576]: Failure getting groups, quitting
Oct 27 08:18:47.348610 extend-filesystems[1575]: Found /dev/vda9
Oct 27 08:18:47.346379 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 27 08:18:47.343675 oslogin_cache_refresh[1576]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 27 08:18:47.348803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 27 08:18:47.349086 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 27 08:18:47.349534 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 27 08:18:47.349836 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 27 08:18:47.350850 extend-filesystems[1575]: Checking size of /dev/vda9
Oct 27 08:18:47.352357 systemd[1]: motdgen.service: Deactivated successfully.
Oct 27 08:18:47.352623 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 27 08:18:47.357119 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 27 08:18:47.361456 extend-filesystems[1575]: Resized partition /dev/vda9
Oct 27 08:18:47.362980 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 27 08:18:47.364348 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025)
Oct 27 08:18:47.370515 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 27 08:18:47.375953 jq[1591]: true
Oct 27 08:18:47.394204 tar[1602]: linux-amd64/LICENSE
Oct 27 08:18:47.399320 jq[1621]: true
Oct 27 08:18:47.399559 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 27 08:18:47.410258 update_engine[1587]: I20251027 08:18:47.410163 1587 main.cc:92] Flatcar Update Engine starting
Oct 27 08:18:47.413824 (ntainerd)[1611]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 27 08:18:47.446032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 27 08:18:47.455517 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 27 08:18:47.455517 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 27 08:18:47.455517 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 27 08:18:47.473028 extend-filesystems[1575]: Resized filesystem in /dev/vda9
Oct 27 08:18:47.457310 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 27 08:18:47.474749 tar[1602]: linux-amd64/helm
Oct 27 08:18:47.457675 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 27 08:18:47.500423 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 27 08:18:47.500456 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 27 08:18:47.557532 bash[1644]: Updated "/home/core/.ssh/authorized_keys"
Oct 27 08:18:47.501316 systemd-logind[1586]: New seat seat0.
Oct 27 08:18:47.503413 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 27 08:18:47.559541 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 27 08:18:47.563970 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 27 08:18:47.565156 dbus-daemon[1572]: [system] SELinux support is enabled
Oct 27 08:18:47.572116 dbus-daemon[1572]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 27 08:18:47.573874 update_engine[1587]: I20251027 08:18:47.572597 1587 update_check_scheduler.cc:74] Next update check in 11m32s
Oct 27 08:18:47.565431 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 27 08:18:47.570316 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 27 08:18:47.570356 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 27 08:18:47.573027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 27 08:18:47.573053 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 27 08:18:47.575352 systemd[1]: Started update-engine.service - Update Engine.
Oct 27 08:18:47.580723 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 27 08:18:47.645771 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 27 08:18:47.699715 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 27 08:18:47.704867 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 27 08:18:47.707245 locksmithd[1646]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 27 08:18:47.710555 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:56106.service - OpenSSH per-connection server daemon (10.0.0.1:56106).
Oct 27 08:18:47.751067 systemd[1]: issuegen.service: Deactivated successfully.
Oct 27 08:18:47.751757 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 27 08:18:47.760143 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 27 08:18:47.814059 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 27 08:18:47.819882 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 27 08:18:47.825652 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 27 08:18:47.828232 systemd[1]: Reached target getty.target - Login Prompts.
Oct 27 08:18:47.943874 systemd-networkd[1513]: eth0: Gained IPv6LL
Oct 27 08:18:47.967024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 27 08:18:47.970755 systemd[1]: Reached target network-online.target - Network is Online.
Oct 27 08:18:47.975893 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 27 08:18:47.980439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:18:47.990546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 27 08:18:48.050716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 27 08:18:48.053687 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 56106 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:48.057150 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:48.057404 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 27 08:18:48.058019 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 27 08:18:48.060918 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 27 08:18:48.073376 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 27 08:18:48.081720 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 27 08:18:48.132791 systemd-logind[1586]: New session 1 of user core.
Oct 27 08:18:48.158159 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 27 08:18:48.165936 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 27 08:18:48.182590 containerd[1611]: time="2025-10-27T08:18:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 27 08:18:48.183611 containerd[1611]: time="2025-10-27T08:18:48.183566922Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 27 08:18:48.201745 containerd[1611]: time="2025-10-27T08:18:48.201393543Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="29.666µs"
Oct 27 08:18:48.201745 containerd[1611]: time="2025-10-27T08:18:48.201519770Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 27 08:18:48.201745 containerd[1611]: time="2025-10-27T08:18:48.201568041Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 27 08:18:48.202085 containerd[1611]: time="2025-10-27T08:18:48.201806909Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 27 08:18:48.202085 containerd[1611]: time="2025-10-27T08:18:48.201822257Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 27 08:18:48.202085 containerd[1611]: time="2025-10-27T08:18:48.201851021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202085 containerd[1611]: time="2025-10-27T08:18:48.202058200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202085 containerd[1611]: time="2025-10-27T08:18:48.202070283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202793 containerd[1611]: time="2025-10-27T08:18:48.202419207Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202793 containerd[1611]: time="2025-10-27T08:18:48.202442781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202793 containerd[1611]: time="2025-10-27T08:18:48.202566393Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 27 08:18:48.202793 containerd[1611]: time="2025-10-27T08:18:48.202576001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 27 08:18:48.203117 containerd[1611]: time="2025-10-27T08:18:48.202940735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 27 08:18:48.203362 containerd[1611]: time="2025-10-27T08:18:48.203332950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 27 08:18:48.203423 containerd[1611]: time="2025-10-27T08:18:48.203379858Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 27 08:18:48.203423 containerd[1611]: time="2025-10-27T08:18:48.203391500Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 27 08:18:48.203535 containerd[1611]: time="2025-10-27T08:18:48.203514832Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 27 08:18:48.203958 containerd[1611]: time="2025-10-27T08:18:48.203931703Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 27 08:18:48.204051 containerd[1611]: time="2025-10-27T08:18:48.204027743Z" level=info msg="metadata content store policy set" policy=shared
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.211954158Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212052222Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212079724Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212154073Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212170574Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212185061Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212202484Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212219105Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212230998Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212241337Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212250945Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212264691Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212415133Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 27 08:18:48.213193 containerd[1611]: time="2025-10-27T08:18:48.212435571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212452673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212483541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212495484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212522133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212534938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212547962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212559173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212569352Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212581956Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212672505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212686191Z" level=info msg="Start snapshots syncer"
Oct 27 08:18:48.213665 containerd[1611]: time="2025-10-27T08:18:48.212731817Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 27 08:18:48.213962 containerd[1611]: time="2025-10-27T08:18:48.213137247Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 27 08:18:48.213962 containerd[1611]: time="2025-10-27T08:18:48.213204513Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213291316Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213484799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213507181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213518071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213529843Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213542727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213553638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213566893Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213609112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213624831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213636022Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213781625Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213806843Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:18:48.214213 containerd[1611]: time="2025-10-27T08:18:48.213817022Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213827421Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213836348Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213847108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213862958Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213903384Z" level=info msg="runtime interface created" Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213909896Z" level=info msg="created NRI interface" Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213919364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213938069Z" level=info msg="Connect containerd service" Oct 27 08:18:48.214554 containerd[1611]: time="2025-10-27T08:18:48.213974627Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:18:48.214939 
containerd[1611]: time="2025-10-27T08:18:48.214911855Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:18:48.219105 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:18:48.222902 systemd-logind[1586]: New session c1 of user core. Oct 27 08:18:48.352679 tar[1602]: linux-amd64/README.md Oct 27 08:18:48.376825 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:18:48.500540 systemd[1696]: Queued start job for default target default.target. Oct 27 08:18:48.511092 systemd[1696]: Created slice app.slice - User Application Slice. Oct 27 08:18:48.511653 systemd[1696]: Reached target paths.target - Paths. Oct 27 08:18:48.511798 systemd[1696]: Reached target timers.target - Timers. Oct 27 08:18:48.513575 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:18:48.528409 containerd[1611]: time="2025-10-27T08:18:48.528256939Z" level=info msg="Start subscribing containerd event" Oct 27 08:18:48.528409 containerd[1611]: time="2025-10-27T08:18:48.528348881Z" level=info msg="Start recovering state" Oct 27 08:18:48.528534 containerd[1611]: time="2025-10-27T08:18:48.528523469Z" level=info msg="Start event monitor" Oct 27 08:18:48.528586 containerd[1611]: time="2025-10-27T08:18:48.528547113Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:18:48.528586 containerd[1611]: time="2025-10-27T08:18:48.528555178Z" level=info msg="Start streaming server" Oct 27 08:18:48.528625 containerd[1611]: time="2025-10-27T08:18:48.528598369Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:18:48.528625 containerd[1611]: time="2025-10-27T08:18:48.528612045Z" level=info msg="runtime interface starting up..." 
Oct 27 08:18:48.528625 containerd[1611]: time="2025-10-27T08:18:48.528621733Z" level=info msg="starting plugins..."
Oct 27 08:18:48.528677 containerd[1611]: time="2025-10-27T08:18:48.528647331Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 27 08:18:48.531246 containerd[1611]: time="2025-10-27T08:18:48.528832088Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 27 08:18:48.531246 containerd[1611]: time="2025-10-27T08:18:48.528902710Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 27 08:18:48.529191 systemd[1]: Started containerd.service - containerd container runtime.
Oct 27 08:18:48.531722 containerd[1611]: time="2025-10-27T08:18:48.531692812Z" level=info msg="containerd successfully booted in 0.355397s"
Oct 27 08:18:48.541215 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 27 08:18:48.541360 systemd[1696]: Reached target sockets.target - Sockets.
Oct 27 08:18:48.541670 systemd[1696]: Reached target basic.target - Basic System.
Oct 27 08:18:48.541781 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 27 08:18:48.543716 systemd[1696]: Reached target default.target - Main User Target.
Oct 27 08:18:48.543759 systemd[1696]: Startup finished in 277ms.
Oct 27 08:18:48.559733 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 27 08:18:48.629399 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118).
Oct 27 08:18:48.689116 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:48.690741 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:48.695713 systemd-logind[1586]: New session 2 of user core.
Oct 27 08:18:48.705616 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 27 08:18:48.762135 sshd[1730]: Connection closed by 10.0.0.1 port 56118
Oct 27 08:18:48.762517 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:48.934277 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:56118.service: Deactivated successfully.
Oct 27 08:18:48.937002 systemd[1]: session-2.scope: Deactivated successfully.
Oct 27 08:18:48.937885 systemd-logind[1586]: Session 2 logged out. Waiting for processes to exit.
Oct 27 08:18:48.941401 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:56134.service - OpenSSH per-connection server daemon (10.0.0.1:56134).
Oct 27 08:18:48.944698 systemd-logind[1586]: Removed session 2.
Oct 27 08:18:49.080442 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 56134 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:49.082380 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:49.087435 systemd-logind[1586]: New session 3 of user core.
Oct 27 08:18:49.098616 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 27 08:18:49.159633 sshd[1740]: Connection closed by 10.0.0.1 port 56134
Oct 27 08:18:49.160120 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:49.198004 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:56134.service: Deactivated successfully.
Oct 27 08:18:49.200274 systemd[1]: session-3.scope: Deactivated successfully.
Oct 27 08:18:49.201106 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit.
Oct 27 08:18:49.202796 systemd-logind[1586]: Removed session 3.
Oct 27 08:18:49.627838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:18:49.630415 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 27 08:18:49.632501 systemd[1]: Startup finished in 3.141s (kernel) + 9.006s (initrd) + 6.137s (userspace) = 18.286s.
Oct 27 08:18:49.676862 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:18:50.242446 kubelet[1750]: E1027 08:18:50.242353 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:18:50.247745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:18:50.247967 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:18:50.248366 systemd[1]: kubelet.service: Consumed 1.895s CPU time, 267.4M memory peak.
Oct 27 08:18:59.175193 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:56968.service - OpenSSH per-connection server daemon (10.0.0.1:56968).
Oct 27 08:18:59.228363 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 56968 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:59.229952 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:59.234488 systemd-logind[1586]: New session 4 of user core.
Oct 27 08:18:59.244607 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 27 08:18:59.299160 sshd[1767]: Connection closed by 10.0.0.1 port 56968
Oct 27 08:18:59.299505 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:59.314945 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:56968.service: Deactivated successfully.
Oct 27 08:18:59.316763 systemd[1]: session-4.scope: Deactivated successfully.
Oct 27 08:18:59.317631 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit.
Oct 27 08:18:59.320436 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:56972.service - OpenSSH per-connection server daemon (10.0.0.1:56972).
Oct 27 08:18:59.321060 systemd-logind[1586]: Removed session 4.
Oct 27 08:18:59.376542 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 56972 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:59.378162 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:59.382554 systemd-logind[1586]: New session 5 of user core.
Oct 27 08:18:59.392602 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 27 08:18:59.443719 sshd[1777]: Connection closed by 10.0.0.1 port 56972
Oct 27 08:18:59.444020 sshd-session[1773]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:59.461936 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:56972.service: Deactivated successfully.
Oct 27 08:18:59.463678 systemd[1]: session-5.scope: Deactivated successfully.
Oct 27 08:18:59.464384 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit.
Oct 27 08:18:59.467048 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:56984.service - OpenSSH per-connection server daemon (10.0.0.1:56984).
Oct 27 08:18:59.467658 systemd-logind[1586]: Removed session 5.
Oct 27 08:18:59.524607 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 56984 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:59.525794 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:59.530310 systemd-logind[1586]: New session 6 of user core.
Oct 27 08:18:59.543632 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 27 08:18:59.597458 sshd[1786]: Connection closed by 10.0.0.1 port 56984
Oct 27 08:18:59.597923 sshd-session[1783]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:59.609211 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:56984.service: Deactivated successfully.
Oct 27 08:18:59.611135 systemd[1]: session-6.scope: Deactivated successfully.
Oct 27 08:18:59.612054 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit.
Oct 27 08:18:59.614869 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:56998.service - OpenSSH per-connection server daemon (10.0.0.1:56998).
Oct 27 08:18:59.615698 systemd-logind[1586]: Removed session 6.
Oct 27 08:18:59.672862 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 56998 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:59.674430 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:59.679362 systemd-logind[1586]: New session 7 of user core.
Oct 27 08:18:59.693623 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 27 08:18:59.759286 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 27 08:18:59.759634 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:18:59.779259 sudo[1796]: pam_unix(sudo:session): session closed for user root
Oct 27 08:18:59.781881 sshd[1795]: Connection closed by 10.0.0.1 port 56998
Oct 27 08:18:59.782313 sshd-session[1792]: pam_unix(sshd:session): session closed for user core
Oct 27 08:18:59.801953 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:56998.service: Deactivated successfully.
Oct 27 08:18:59.804061 systemd[1]: session-7.scope: Deactivated successfully.
Oct 27 08:18:59.804935 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit.
Oct 27 08:18:59.807944 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:57006.service - OpenSSH per-connection server daemon (10.0.0.1:57006).
Oct 27 08:18:59.808735 systemd-logind[1586]: Removed session 7.
Oct 27 08:18:59.864882 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 57006 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:18:59.866392 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:18:59.871273 systemd-logind[1586]: New session 8 of user core.
Oct 27 08:18:59.884821 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 27 08:18:59.941509 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 27 08:18:59.941839 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:18:59.948923 sudo[1807]: pam_unix(sudo:session): session closed for user root
Oct 27 08:18:59.957742 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 27 08:18:59.958066 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:18:59.969353 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 27 08:19:00.024008 augenrules[1829]: No rules
Oct 27 08:19:00.025981 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 27 08:19:00.026304 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 27 08:19:00.027714 sudo[1806]: pam_unix(sudo:session): session closed for user root
Oct 27 08:19:00.029903 sshd[1805]: Connection closed by 10.0.0.1 port 57006
Oct 27 08:19:00.030333 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Oct 27 08:19:00.050072 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:57006.service: Deactivated successfully.
Oct 27 08:19:00.052848 systemd[1]: session-8.scope: Deactivated successfully.
Oct 27 08:19:00.053794 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit.
Oct 27 08:19:00.057043 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976).
Oct 27 08:19:00.058008 systemd-logind[1586]: Removed session 8.
Oct 27 08:19:00.109576 sshd[1838]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:19:00.111386 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:19:00.116300 systemd-logind[1586]: New session 9 of user core.
Oct 27 08:19:00.129705 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 27 08:19:00.187944 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 27 08:19:00.188358 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 27 08:19:00.279392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 27 08:19:00.281066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:00.747418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:00.817868 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:19:00.875245 kubelet[1869]: E1027 08:19:00.875161 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:19:00.882714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:19:00.882918 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:19:00.883316 systemd[1]: kubelet.service: Consumed 401ms CPU time, 110.9M memory peak.
Oct 27 08:19:01.193421 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 27 08:19:01.223050 (dockerd)[1878]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 27 08:19:01.721816 dockerd[1878]: time="2025-10-27T08:19:01.721728147Z" level=info msg="Starting up"
Oct 27 08:19:01.722569 dockerd[1878]: time="2025-10-27T08:19:01.722531233Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 27 08:19:01.765781 dockerd[1878]: time="2025-10-27T08:19:01.765709037Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 27 08:19:02.101521 dockerd[1878]: time="2025-10-27T08:19:02.101435955Z" level=info msg="Loading containers: start."
Oct 27 08:19:02.113497 kernel: Initializing XFRM netlink socket
Oct 27 08:19:02.399058 systemd-networkd[1513]: docker0: Link UP
Oct 27 08:19:02.403984 dockerd[1878]: time="2025-10-27T08:19:02.403927294Z" level=info msg="Loading containers: done."
Oct 27 08:19:02.426182 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4123237489-merged.mount: Deactivated successfully.
Oct 27 08:19:02.426690 dockerd[1878]: time="2025-10-27T08:19:02.426497591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 27 08:19:02.426690 dockerd[1878]: time="2025-10-27T08:19:02.426619650Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 27 08:19:02.426775 dockerd[1878]: time="2025-10-27T08:19:02.426717534Z" level=info msg="Initializing buildkit"
Oct 27 08:19:02.706763 dockerd[1878]: time="2025-10-27T08:19:02.706597619Z" level=info msg="Completed buildkit initialization"
Oct 27 08:19:02.713584 dockerd[1878]: time="2025-10-27T08:19:02.713506636Z" level=info msg="Daemon has completed initialization"
Oct 27 08:19:02.713735 dockerd[1878]: time="2025-10-27T08:19:02.713623004Z" level=info msg="API listen on /run/docker.sock"
Oct 27 08:19:02.713893 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 27 08:19:03.723175 containerd[1611]: time="2025-10-27T08:19:03.723122175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 27 08:19:04.407506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847228933.mount: Deactivated successfully.
Oct 27 08:19:05.561629 containerd[1611]: time="2025-10-27T08:19:05.561553771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:05.562270 containerd[1611]: time="2025-10-27T08:19:05.562229708Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 27 08:19:05.563562 containerd[1611]: time="2025-10-27T08:19:05.563535136Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:05.566318 containerd[1611]: time="2025-10-27T08:19:05.566286015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:05.569375 containerd[1611]: time="2025-10-27T08:19:05.569316519Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.84613383s"
Oct 27 08:19:05.569375 containerd[1611]: time="2025-10-27T08:19:05.569370440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 27 08:19:05.570517 containerd[1611]: time="2025-10-27T08:19:05.570306725Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 27 08:19:07.210616 containerd[1611]: time="2025-10-27T08:19:07.210547302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:07.211279 containerd[1611]: time="2025-10-27T08:19:07.211229682Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 27 08:19:07.212500 containerd[1611]: time="2025-10-27T08:19:07.212442586Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:07.215327 containerd[1611]: time="2025-10-27T08:19:07.215270951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:07.216133 containerd[1611]: time="2025-10-27T08:19:07.216086209Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.64574541s"
Oct 27 08:19:07.216197 containerd[1611]: time="2025-10-27T08:19:07.216138327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 27 08:19:07.216940 containerd[1611]: time="2025-10-27T08:19:07.216913301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 27 08:19:09.318583 containerd[1611]: time="2025-10-27T08:19:09.318504685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:09.319544 containerd[1611]: time="2025-10-27T08:19:09.319510761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Oct 27 08:19:09.320570 containerd[1611]: time="2025-10-27T08:19:09.320523350Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:09.323402 containerd[1611]: time="2025-10-27T08:19:09.323322790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:09.324335 containerd[1611]: time="2025-10-27T08:19:09.324284123Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.107339083s"
Oct 27 08:19:09.324335 containerd[1611]: time="2025-10-27T08:19:09.324327975Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 27 08:19:09.325024 containerd[1611]: time="2025-10-27T08:19:09.324988624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 27 08:19:10.835016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354211339.mount: Deactivated successfully.
Oct 27 08:19:11.028697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 27 08:19:11.031056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:11.484836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:11.499894 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 27 08:19:11.581745 kubelet[2180]: E1027 08:19:11.581675 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 27 08:19:11.587574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 27 08:19:11.587984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 27 08:19:11.588492 systemd[1]: kubelet.service: Consumed 275ms CPU time, 108.6M memory peak.
Oct 27 08:19:12.341559 containerd[1611]: time="2025-10-27T08:19:12.341432213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:12.342258 containerd[1611]: time="2025-10-27T08:19:12.342160098Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Oct 27 08:19:12.343597 containerd[1611]: time="2025-10-27T08:19:12.343504409Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:12.345562 containerd[1611]: time="2025-10-27T08:19:12.345501304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:12.346075 containerd[1611]: time="2025-10-27T08:19:12.346018824Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.020999192s"
Oct 27 08:19:12.346075 containerd[1611]: time="2025-10-27T08:19:12.346065692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 27 08:19:12.347100 containerd[1611]: time="2025-10-27T08:19:12.347068292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 27 08:19:13.210261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828439342.mount: Deactivated successfully.
Oct 27 08:19:14.956650 containerd[1611]: time="2025-10-27T08:19:14.956555372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:14.957440 containerd[1611]: time="2025-10-27T08:19:14.957389697Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Oct 27 08:19:14.958869 containerd[1611]: time="2025-10-27T08:19:14.958827163Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:14.961741 containerd[1611]: time="2025-10-27T08:19:14.961696824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:14.963033 containerd[1611]: time="2025-10-27T08:19:14.962966666Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.615860032s"
Oct 27 08:19:14.963033 containerd[1611]: time="2025-10-27T08:19:14.963030535Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 27 08:19:14.963618 containerd[1611]: time="2025-10-27T08:19:14.963568795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 27 08:19:15.480141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94383254.mount: Deactivated successfully.
Oct 27 08:19:15.486087 containerd[1611]: time="2025-10-27T08:19:15.486021430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:19:15.486752 containerd[1611]: time="2025-10-27T08:19:15.486703158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Oct 27 08:19:15.487817 containerd[1611]: time="2025-10-27T08:19:15.487779898Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:19:15.489808 containerd[1611]: time="2025-10-27T08:19:15.489771893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 27 08:19:15.490413 containerd[1611]: time="2025-10-27T08:19:15.490368051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.764671ms"
Oct 27 08:19:15.490413 containerd[1611]: time="2025-10-27T08:19:15.490411323Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 27 08:19:15.490983 containerd[1611]: time="2025-10-27T08:19:15.490922962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 27 08:19:16.023787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971130159.mount: Deactivated successfully.
Oct 27 08:19:18.430852 containerd[1611]: time="2025-10-27T08:19:18.430747823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:18.431607 containerd[1611]: time="2025-10-27T08:19:18.431500475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Oct 27 08:19:18.432799 containerd[1611]: time="2025-10-27T08:19:18.432751090Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:18.435690 containerd[1611]: time="2025-10-27T08:19:18.435644276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:19:18.436816 containerd[1611]: time="2025-10-27T08:19:18.436748867Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.945790258s"
Oct 27 08:19:18.436884 containerd[1611]: time="2025-10-27T08:19:18.436814029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 27 08:19:21.619872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 27 08:19:21.622094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:21.639609 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 27 08:19:21.639730 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 27 08:19:21.640104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:21.643036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:21.670830 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-9.scope)...
Oct 27 08:19:21.670847 systemd[1]: Reloading...
Oct 27 08:19:21.767940 zram_generator::config[2382]: No configuration found.
Oct 27 08:19:22.257032 systemd[1]: Reloading finished in 585 ms.
Oct 27 08:19:22.323167 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 27 08:19:22.323280 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 27 08:19:22.323604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:22.323646 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.4M memory peak.
Oct 27 08:19:22.325178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:22.520551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:22.531799 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 08:19:22.588181 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:19:22.588181 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 08:19:22.588181 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:19:22.588700 kubelet[2424]: I1027 08:19:22.588254 2424 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 08:19:23.132969 kubelet[2424]: I1027 08:19:23.132437 2424 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 27 08:19:23.132969 kubelet[2424]: I1027 08:19:23.132977 2424 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 08:19:23.133392 kubelet[2424]: I1027 08:19:23.133368 2424 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 27 08:19:23.165666 kubelet[2424]: I1027 08:19:23.165575 2424 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:19:23.166492 kubelet[2424]: E1027 08:19:23.166384 2424 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 27 08:19:23.177822 kubelet[2424]: I1027 08:19:23.177794 2424 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 08:19:23.186532 kubelet[2424]: I1027 08:19:23.186493 2424 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 08:19:23.186862 kubelet[2424]: I1027 08:19:23.186801 2424 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 08:19:23.187049 kubelet[2424]: I1027 08:19:23.186841 2424 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 08:19:23.187239 kubelet[2424]: I1027 08:19:23.187052 2424 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 08:19:23.187239 kubelet[2424]: I1027 08:19:23.187064 2424 container_manager_linux.go:303] "Creating device plugin manager"
Oct 27 08:19:23.187315 kubelet[2424]: I1027 08:19:23.187253 2424 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:19:23.190256 kubelet[2424]: I1027 08:19:23.190213 2424 kubelet.go:480] "Attempting to sync node with API server"
Oct 27 08:19:23.190329 kubelet[2424]: I1027 08:19:23.190268 2424 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 08:19:23.190329 kubelet[2424]: I1027 08:19:23.190321 2424 kubelet.go:386] "Adding apiserver pod source"
Oct 27 08:19:23.192896 kubelet[2424]: I1027 08:19:23.192723 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 08:19:23.374262 kubelet[2424]: E1027 08:19:23.372068 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 27 08:19:23.376150 kubelet[2424]: I1027 08:19:23.376097 2424 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 08:19:23.376361 kubelet[2424]: E1027 08:19:23.376257 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 27 08:19:23.377102 kubelet[2424]: I1027 08:19:23.376878 2424 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 27 08:19:23.377584 kubelet[2424]: W1027 08:19:23.377559 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 27 08:19:23.381239 kubelet[2424]: I1027 08:19:23.381219 2424 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 08:19:23.381298 kubelet[2424]: I1027 08:19:23.381276 2424 server.go:1289] "Started kubelet"
Oct 27 08:19:23.384437 kubelet[2424]: I1027 08:19:23.384260 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 08:19:23.385001 kubelet[2424]: I1027 08:19:23.384653 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 08:19:23.385001 kubelet[2424]: I1027 08:19:23.384713 2424 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 08:19:23.385001 kubelet[2424]: I1027 08:19:23.384768 2424 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 08:19:23.386045 kubelet[2424]: I1027 08:19:23.385863 2424 server.go:317] "Adding debug handlers to kubelet server"
Oct 27 08:19:23.388272 kubelet[2424]: I1027 08:19:23.386634 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 08:19:23.388872 kubelet[2424]: E1027 08:19:23.388630 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:23.388872 kubelet[2424]: I1027 08:19:23.388685 2424 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 08:19:23.388872 kubelet[2424]: I1027 08:19:23.388869 2424 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 08:19:23.389112 kubelet[2424]: I1027 08:19:23.388917 2424 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 08:19:23.389326 kubelet[2424]: E1027 08:19:23.389296 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 27 08:19:23.389495 kubelet[2424]: E1027 08:19:23.389452 2424 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 08:19:23.391495 kubelet[2424]: E1027 08:19:23.389718 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms"
Oct 27 08:19:23.391495 kubelet[2424]: I1027 08:19:23.391027 2424 factory.go:223] Registration of the systemd container factory successfully
Oct 27 08:19:23.391495 kubelet[2424]: I1027 08:19:23.391133 2424 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 08:19:23.393487 kubelet[2424]: I1027 08:19:23.392197 2424 factory.go:223] Registration of the containerd container factory successfully
Oct 27 08:19:24.302177 kubelet[2424]: E1027 08:19:23.389089 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18724b44dc4df514 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 08:19:23.381241108 +0000 UTC m=+0.845172910,LastTimestamp:2025-10-27 08:19:23.381241108 +0000 UTC m=+0.845172910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Oct 27 08:19:24.304763 kubelet[2424]: E1027 08:19:24.302776 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:24.304763 kubelet[2424]: E1027 08:19:24.304402 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms"
Oct 27 08:19:24.312185 kubelet[2424]: I1027 08:19:24.312166 2424 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 08:19:24.312301 kubelet[2424]: I1027 08:19:24.312281 2424 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 08:19:24.312348 kubelet[2424]: I1027 08:19:24.312316 2424 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:19:24.373212 kubelet[2424]: E1027 08:19:24.373179 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 27 08:19:24.403398 kubelet[2424]: E1027 08:19:24.403352 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:24.503893 kubelet[2424]: E1027 08:19:24.503840 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:24.820210 kubelet[2424]: I1027 08:19:24.820039 2424 policy_none.go:49] "None policy: Start"
Oct 27 08:19:24.820210 kubelet[2424]: I1027 08:19:24.820103 2424 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 08:19:24.820210 kubelet[2424]: I1027 08:19:24.820137 2424 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 08:19:24.834201 kubelet[2424]: E1027 08:19:24.833576 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:24.834201 kubelet[2424]: E1027 08:19:24.834137 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms"
Oct 27 08:19:24.834419 kubelet[2424]: E1027 08:19:24.834180 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 27 08:19:24.839255 kubelet[2424]: I1027 08:19:24.839182 2424 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 27 08:19:24.841211 kubelet[2424]: I1027 08:19:24.841189 2424 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 27 08:19:24.841384 kubelet[2424]: I1027 08:19:24.841242 2424 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 27 08:19:24.841384 kubelet[2424]: I1027 08:19:24.841319 2424 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 08:19:24.841914 kubelet[2424]: E1027 08:19:24.841885 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 27 08:19:24.841981 kubelet[2424]: I1027 08:19:24.841931 2424 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 27 08:19:24.842014 kubelet[2424]: E1027 08:19:24.841986 2424 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 08:19:24.909664 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 27 08:19:24.918859 kubelet[2424]: E1027 08:19:24.918798 2424 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 27 08:19:24.926763 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 27 08:19:24.931757 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 27 08:19:24.934662 kubelet[2424]: E1027 08:19:24.934633 2424 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 27 08:19:24.942937 kubelet[2424]: E1027 08:19:24.942836 2424 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 27 08:19:24.947212 kubelet[2424]: E1027 08:19:24.947170 2424 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 27 08:19:24.947573 kubelet[2424]: I1027 08:19:24.947554 2424 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 08:19:24.947660 kubelet[2424]: I1027 08:19:24.947577 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 08:19:24.948930 kubelet[2424]: I1027 08:19:24.948522 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 08:19:24.949343 kubelet[2424]: E1027 08:19:24.949273 2424 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 08:19:24.949343 kubelet[2424]: E1027 08:19:24.949322 2424 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Oct 27 08:19:25.050077 kubelet[2424]: I1027 08:19:25.050024 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:19:25.050612 kubelet[2424]: E1027 08:19:25.050552 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Oct 27 08:19:25.155577 systemd[1]: Created slice kubepods-burstable-pode38e378303c90433dea04b89d602f831.slice - libcontainer container kubepods-burstable-pode38e378303c90433dea04b89d602f831.slice.
Oct 27 08:19:25.237244 kubelet[2424]: I1027 08:19:25.237160 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:25.237244 kubelet[2424]: I1027 08:19:25.237219 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:25.237244 kubelet[2424]: I1027 08:19:25.237253 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:25.237537 kubelet[2424]: I1027 08:19:25.237274 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:25.237537 kubelet[2424]: I1027 08:19:25.237293 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:25.237537 kubelet[2424]: I1027 08:19:25.237310 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:25.237537 kubelet[2424]: I1027 08:19:25.237332 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:25.237537 kubelet[2424]: I1027 08:19:25.237353 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:25.237946 kubelet[2424]: I1027 08:19:25.237377 2424 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:25.252925 kubelet[2424]: I1027 08:19:25.252883 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:19:25.253315 kubelet[2424]: E1027 08:19:25.253271 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Oct 27 08:19:25.276739 kubelet[2424]: E1027 08:19:25.276706 2424 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 27 08:19:25.429142 kubelet[2424]: E1027 08:19:25.428996 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:25.429638 kubelet[2424]: E1027 08:19:25.429561 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.430511 containerd[1611]: time="2025-10-27T08:19:25.430441859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e38e378303c90433dea04b89d602f831,Namespace:kube-system,Attempt:0,}"
Oct 27 08:19:25.433617 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice.
Oct 27 08:19:25.454835 kubelet[2424]: E1027 08:19:25.454784 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:25.455182 kubelet[2424]: E1027 08:19:25.455160 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.455723 containerd[1611]: time="2025-10-27T08:19:25.455684845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}"
Oct 27 08:19:25.457388 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice.
Oct 27 08:19:25.460414 kubelet[2424]: E1027 08:19:25.460386 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:25.461029 kubelet[2424]: E1027 08:19:25.461005 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.461867 containerd[1611]: time="2025-10-27T08:19:25.461803192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}"
Oct 27 08:19:25.465643 containerd[1611]: time="2025-10-27T08:19:25.465593752Z" level=info msg="connecting to shim aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc" address="unix:///run/containerd/s/51456393ea79651b87f767da642237f18304f1d1d5a7e0c3c5b6863ef7a0e7e0" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:19:25.552519 containerd[1611]: time="2025-10-27T08:19:25.552439293Z" level=info msg="connecting to shim 5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3" address="unix:///run/containerd/s/5ac432fc9da70727dc95d6520ef305eb9ca394e7ec4f1f370dec26dede8c3f48" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:19:25.556854 systemd[1]: Started cri-containerd-aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc.scope - libcontainer container aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc.
Oct 27 08:19:25.559559 containerd[1611]: time="2025-10-27T08:19:25.559524276Z" level=info msg="connecting to shim 6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b" address="unix:///run/containerd/s/a5000b2c698819c7d655b14d6ad15bcf860eac947d5c94690dfb6097fce9fe74" namespace=k8s.io protocol=ttrpc version=3
Oct 27 08:19:25.727957 kubelet[2424]: E1027 08:19:25.727762 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s"
Oct 27 08:19:25.727957 kubelet[2424]: I1027 08:19:25.727896 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:19:25.728382 kubelet[2424]: E1027 08:19:25.728349 2424 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Oct 27 08:19:25.730076 systemd[1]: Started cri-containerd-5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3.scope - libcontainer container 5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3.
Oct 27 08:19:25.757664 systemd[1]: Started cri-containerd-6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b.scope - libcontainer container 6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b.
Oct 27 08:19:25.783343 containerd[1611]: time="2025-10-27T08:19:25.783301116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e38e378303c90433dea04b89d602f831,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc\""
Oct 27 08:19:25.785247 kubelet[2424]: E1027 08:19:25.785207 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.792598 containerd[1611]: time="2025-10-27T08:19:25.792520555Z" level=info msg="CreateContainer within sandbox \"aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 27 08:19:25.808671 containerd[1611]: time="2025-10-27T08:19:25.808615368Z" level=info msg="Container d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:19:25.819570 containerd[1611]: time="2025-10-27T08:19:25.819507002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3\""
Oct 27 08:19:25.820362 kubelet[2424]: E1027 08:19:25.820320 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.820940 containerd[1611]: time="2025-10-27T08:19:25.820893230Z" level=info msg="CreateContainer within sandbox \"aa30803f85b8f2f2b215456fa9607e9c7f6742835d34c23cc1420a19fda69acc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2\""
Oct 27 08:19:25.823498 containerd[1611]: time="2025-10-27T08:19:25.822958504Z" level=info msg="StartContainer for \"d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2\""
Oct 27 08:19:25.824775 containerd[1611]: time="2025-10-27T08:19:25.824727794Z" level=info msg="connecting to shim d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2" address="unix:///run/containerd/s/51456393ea79651b87f767da642237f18304f1d1d5a7e0c3c5b6863ef7a0e7e0" protocol=ttrpc version=3
Oct 27 08:19:25.827294 containerd[1611]: time="2025-10-27T08:19:25.827218611Z" level=info msg="CreateContainer within sandbox \"5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 27 08:19:25.834444 containerd[1611]: time="2025-10-27T08:19:25.834408224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b\""
Oct 27 08:19:25.835712 kubelet[2424]: E1027 08:19:25.835683 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:25.840776 containerd[1611]: time="2025-10-27T08:19:25.840725671Z" level=info msg="CreateContainer within sandbox \"6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 27 08:19:25.843397 containerd[1611]: time="2025-10-27T08:19:25.843349082Z" level=info msg="Container 66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:19:25.854633 systemd[1]: Started cri-containerd-d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2.scope - libcontainer container d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2.
Oct 27 08:19:25.855777 containerd[1611]: time="2025-10-27T08:19:25.855734750Z" level=info msg="CreateContainer within sandbox \"5456f8e496fe5f0671ed02cc4f75535a3cd110d9d51fa64d9e84dedb474424b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058\""
Oct 27 08:19:25.856516 containerd[1611]: time="2025-10-27T08:19:25.856249363Z" level=info msg="StartContainer for \"66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058\""
Oct 27 08:19:25.857614 containerd[1611]: time="2025-10-27T08:19:25.857369472Z" level=info msg="Container 6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:19:25.857614 containerd[1611]: time="2025-10-27T08:19:25.857545097Z" level=info msg="connecting to shim 66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058" address="unix:///run/containerd/s/5ac432fc9da70727dc95d6520ef305eb9ca394e7ec4f1f370dec26dede8c3f48" protocol=ttrpc version=3
Oct 27 08:19:25.870773 containerd[1611]: time="2025-10-27T08:19:25.870730042Z" level=info msg="CreateContainer within sandbox \"6a72207dbcf1f732df23590037ab49f8d1d2136f866ac8b5218075fca467ef9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886\""
Oct 27 08:19:25.872708 containerd[1611]: time="2025-10-27T08:19:25.872647163Z" level=info msg="StartContainer for \"6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886\""
Oct 27 08:19:25.874079 containerd[1611]: time="2025-10-27T08:19:25.873979829Z" level=info msg="connecting to shim 6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886" address="unix:///run/containerd/s/a5000b2c698819c7d655b14d6ad15bcf860eac947d5c94690dfb6097fce9fe74" protocol=ttrpc version=3
Oct 27 08:19:25.887616 systemd[1]: Started cri-containerd-66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058.scope - libcontainer container 66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058.
Oct 27 08:19:25.902626 systemd[1]: Started cri-containerd-6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886.scope - libcontainer container 6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886.
Oct 27 08:19:25.927993 containerd[1611]: time="2025-10-27T08:19:25.927945878Z" level=info msg="StartContainer for \"d03f58a0db311dd904e5f6521736e907e5efb4b6ad88f4bcbc0e45133b0261b2\" returns successfully"
Oct 27 08:19:25.967524 containerd[1611]: time="2025-10-27T08:19:25.967147466Z" level=info msg="StartContainer for \"66c797c6ad9b3682404e786277d3c9e4c18e4d675ea4c266db967e2c4ba3b058\" returns successfully"
Oct 27 08:19:25.981345 containerd[1611]: time="2025-10-27T08:19:25.981169970Z" level=info msg="StartContainer for \"6fe03b7736b7bc75c8bbbade365ed76f0637ea323d54666b3d5dfd1229a41886\" returns successfully"
Oct 27 08:19:26.531582 kubelet[2424]: I1027 08:19:26.531417 2424 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:19:26.860157 kubelet[2424]: E1027 08:19:26.860018 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:26.860157 kubelet[2424]: E1027 08:19:26.860151 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:26.862919 kubelet[2424]: E1027 08:19:26.862893 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:26.863506 kubelet[2424]: E1027 08:19:26.863490 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:26.865896 kubelet[2424]: E1027 08:19:26.865842 2424 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 27 08:19:26.866101 kubelet[2424]: E1027 08:19:26.866051 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:27.276522 kubelet[2424]: I1027 08:19:27.275544 2424 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 27 08:19:27.276522 kubelet[2424]: E1027 08:19:27.275585 2424 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 27 08:19:27.289903 kubelet[2424]: I1027 08:19:27.289762 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:27.299549 kubelet[2424]: E1027 08:19:27.299512 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:27.299860 kubelet[2424]: I1027 08:19:27.299689 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:27.301437 kubelet[2424]: E1027 08:19:27.301398 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:27.301594 kubelet[2424]: I1027 08:19:27.301522 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:27.302899 kubelet[2424]: E1027 08:19:27.302872 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:27.305177 kubelet[2424]: I1027 08:19:27.305159 2424 apiserver.go:52] "Watching apiserver"
Oct 27 08:19:27.389442 kubelet[2424]: I1027 08:19:27.389368 2424 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 27 08:19:27.868315 kubelet[2424]: I1027 08:19:27.868283 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:27.868848 kubelet[2424]: I1027 08:19:27.868454 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:27.870549 kubelet[2424]: E1027 08:19:27.870516 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:27.870728 kubelet[2424]: E1027 08:19:27.870697 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:27.870764 kubelet[2424]: E1027 08:19:27.870741 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:27.870990 kubelet[2424]: E1027 08:19:27.870945 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:27.871113 kubelet[2424]: I1027 08:19:27.871094 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:27.872523 kubelet[2424]: E1027 08:19:27.872489 2424 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:27.872639 kubelet[2424]: E1027 08:19:27.872615 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:28.870828 kubelet[2424]: I1027 08:19:28.870775 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:28.871349 kubelet[2424]: I1027 08:19:28.871230 2424 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:28.875012 kubelet[2424]: E1027 08:19:28.874977 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:28.877044 kubelet[2424]: E1027 08:19:28.877006 2424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:29.240668 systemd[1]: Reload requested from client PID 2714 ('systemctl') (unit session-9.scope)...
Oct 27 08:19:29.240696 systemd[1]: Reloading...
Oct 27 08:19:29.319547 zram_generator::config[2758]: No configuration found.
Oct 27 08:19:29.582153 systemd[1]: Reloading finished in 341 ms.
Oct 27 08:19:29.612654 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:29.641229 systemd[1]: kubelet.service: Deactivated successfully.
Oct 27 08:19:29.641603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:29.641689 systemd[1]: kubelet.service: Consumed 2.485s CPU time, 130.5M memory peak.
Oct 27 08:19:29.643894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 27 08:19:29.948791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 27 08:19:29.959166 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 27 08:19:30.009500 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:19:30.009500 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 27 08:19:30.009500 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 27 08:19:30.009500 kubelet[2803]: I1027 08:19:30.008282 2803 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 27 08:19:30.018636 kubelet[2803]: I1027 08:19:30.018587 2803 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 27 08:19:30.018636 kubelet[2803]: I1027 08:19:30.018611 2803 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 27 08:19:30.018807 kubelet[2803]: I1027 08:19:30.018791 2803 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 27 08:19:30.019948 kubelet[2803]: I1027 08:19:30.019929 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 27 08:19:30.022064 kubelet[2803]: I1027 08:19:30.022039 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 27 08:19:30.028521 kubelet[2803]: I1027 08:19:30.028375 2803 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 27 08:19:30.036352 kubelet[2803]: I1027 08:19:30.036315 2803 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 27 08:19:30.036641 kubelet[2803]: I1027 08:19:30.036594 2803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 27 08:19:30.036850 kubelet[2803]: I1027 08:19:30.036629 2803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 27 08:19:30.036954 kubelet[2803]: I1027 08:19:30.036870 2803 topology_manager.go:138] "Creating topology manager with none policy"
Oct 27 08:19:30.036954 kubelet[2803]: I1027 08:19:30.036884 2803 container_manager_linux.go:303] "Creating device plugin manager"
Oct 27 08:19:30.036954 kubelet[2803]: I1027 08:19:30.036937 2803 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:19:30.037133 kubelet[2803]: I1027 08:19:30.037115 2803 kubelet.go:480] "Attempting to sync node with API server"
Oct 27 08:19:30.037156 kubelet[2803]: I1027 08:19:30.037138 2803 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 27 08:19:30.037186 kubelet[2803]: I1027 08:19:30.037165 2803 kubelet.go:386] "Adding apiserver pod source"
Oct 27 08:19:30.037186 kubelet[2803]: I1027 08:19:30.037183 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 27 08:19:30.040218 kubelet[2803]: I1027 08:19:30.039981 2803 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 27 08:19:30.040795 kubelet[2803]: I1027 08:19:30.040768 2803 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 27 08:19:30.048245 kubelet[2803]: I1027 08:19:30.048204 2803 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 27 08:19:30.048344 kubelet[2803]: I1027 08:19:30.048263 2803 server.go:1289] "Started kubelet"
Oct 27 08:19:30.048811 kubelet[2803]: I1027 08:19:30.048748 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 27 08:19:30.049115 kubelet[2803]: I1027 08:19:30.048766 2803 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 27 08:19:30.050373 kubelet[2803]: I1027 08:19:30.050339 2803 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 27 08:19:30.053772 kubelet[2803]: I1027 08:19:30.053756 2803 server.go:317] "Adding debug handlers to kubelet server"
Oct 27 08:19:30.054461 kubelet[2803]: I1027 08:19:30.054411 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 27 08:19:30.054538 kubelet[2803]: I1027 08:19:30.054465 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 27 08:19:30.056123 kubelet[2803]: I1027 08:19:30.056037 2803 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 27 08:19:30.058883 kubelet[2803]: E1027 08:19:30.058670 2803 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 27 08:19:30.059081 kubelet[2803]: I1027 08:19:30.059031 2803 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 27 08:19:30.059282 kubelet[2803]: I1027 08:19:30.059258 2803 reconciler.go:26] "Reconciler: start to sync state"
Oct 27 08:19:30.059585 kubelet[2803]: I1027 08:19:30.059548 2803 factory.go:223] Registration of the systemd container factory successfully
Oct 27 08:19:30.059822 kubelet[2803]: I1027 08:19:30.059801 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 27 08:19:30.062170 kubelet[2803]: I1027 08:19:30.062153 2803 factory.go:223] Registration of the containerd container factory successfully
Oct 27 08:19:30.075046 kubelet[2803]: I1027 08:19:30.074962 2803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 27 08:19:30.077406 kubelet[2803]: I1027 08:19:30.077369 2803 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 27 08:19:30.077406 kubelet[2803]: I1027 08:19:30.077398 2803 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 27 08:19:30.077538 kubelet[2803]: I1027 08:19:30.077437 2803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 27 08:19:30.077538 kubelet[2803]: I1027 08:19:30.077448 2803 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 27 08:19:30.077639 kubelet[2803]: E1027 08:19:30.077540 2803 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 27 08:19:30.124820 kubelet[2803]: I1027 08:19:30.124773 2803 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 27 08:19:30.124820 kubelet[2803]: I1027 08:19:30.124801 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 27 08:19:30.124820 kubelet[2803]: I1027 08:19:30.124826 2803 state_mem.go:36] "Initialized new in-memory state store"
Oct 27 08:19:30.125052 kubelet[2803]: I1027 08:19:30.125015 2803 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 27 08:19:30.125052 kubelet[2803]: I1027 08:19:30.125028 2803 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 27 08:19:30.125052 kubelet[2803]: I1027 08:19:30.125049 2803 policy_none.go:49] "None policy: Start"
Oct 27 08:19:30.125159 kubelet[2803]: I1027 08:19:30.125062 2803 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 27 08:19:30.125159 kubelet[2803]: I1027 08:19:30.125077 2803 state_mem.go:35] "Initializing new in-memory state store"
Oct 27 08:19:30.125219 kubelet[2803]: I1027 08:19:30.125181 2803 state_mem.go:75] "Updated machine memory state"
Oct 27 08:19:30.131086 kubelet[2803]: E1027 08:19:30.130822 2803 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 27 08:19:30.131086 kubelet[2803]: I1027 08:19:30.131044 2803 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 27 08:19:30.131086 kubelet[2803]: I1027 08:19:30.131056 2803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 27 08:19:30.131412 kubelet[2803]: I1027 08:19:30.131368 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 27 08:19:30.134199 kubelet[2803]: E1027 08:19:30.134173 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 27 08:19:30.178988 kubelet[2803]: I1027 08:19:30.178934 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:30.179166 kubelet[2803]: I1027 08:19:30.179128 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.179309 kubelet[2803]: I1027 08:19:30.178934 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:30.184776 kubelet[2803]: E1027 08:19:30.184731 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:30.184912 kubelet[2803]: E1027 08:19:30.184823 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:30.240593 kubelet[2803]: I1027 08:19:30.240195 2803 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 27 08:19:30.247431 kubelet[2803]: I1027 08:19:30.247087 2803 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 27 08:19:30.247431 kubelet[2803]: I1027 08:19:30.247197 2803 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 27 08:19:30.260489 kubelet[2803]: I1027 08:19:30.260438 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:30.260736 kubelet[2803]: I1027 08:19:30.260502 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:30.260736 kubelet[2803]: I1027 08:19:30.260540 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.260736 kubelet[2803]: I1027 08:19:30.260559 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.260736 kubelet[2803]: I1027 08:19:30.260587 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:30.260736 kubelet[2803]: I1027 08:19:30.260602 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e38e378303c90433dea04b89d602f831-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e38e378303c90433dea04b89d602f831\") " pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:30.260924 kubelet[2803]: I1027 08:19:30.260618 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.260924 kubelet[2803]: I1027 08:19:30.260634 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.260924 kubelet[2803]: I1027 08:19:30.260650 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:30.485441 kubelet[2803]: E1027 08:19:30.485382 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:30.485441 kubelet[2803]: E1027 08:19:30.485408 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:30.485823 kubelet[2803]: E1027 08:19:30.485724 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:31.038612 kubelet[2803]: I1027 08:19:31.038556 2803 apiserver.go:52] "Watching apiserver"
Oct 27 08:19:31.059443 kubelet[2803]: I1027 08:19:31.059393 2803 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 27 08:19:31.099811 kubelet[2803]: I1027 08:19:31.099771 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:31.100187 kubelet[2803]: I1027 08:19:31.100163 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:31.100580 kubelet[2803]: I1027 08:19:31.100549 2803 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:31.577705 kubelet[2803]: E1027 08:19:31.577267 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 27 08:19:31.577705 kubelet[2803]: E1027 08:19:31.577574 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:31.578734 kubelet[2803]: E1027 08:19:31.578685 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 27 08:19:31.578941 kubelet[2803]: E1027 08:19:31.578105 2803 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 27 08:19:31.579126 kubelet[2803]: E1027 08:19:31.579095 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:31.579226 kubelet[2803]: E1027 08:19:31.579196 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:31.591827 kubelet[2803]: I1027 08:19:31.591736 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.591651789 podStartE2EDuration="3.591651789s" podCreationTimestamp="2025-10-27 08:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:19:31.577335103 +0000 UTC m=+1.611970031" watchObservedRunningTime="2025-10-27 08:19:31.591651789 +0000 UTC m=+1.626286717"
Oct 27 08:19:31.604018 kubelet[2803]: I1027 08:19:31.603948 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.603928041 podStartE2EDuration="3.603928041s" podCreationTimestamp="2025-10-27 08:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:19:31.591962459 +0000 UTC m=+1.626597397" watchObservedRunningTime="2025-10-27 08:19:31.603928041 +0000 UTC m=+1.638562969"
Oct 27 08:19:31.621939 kubelet[2803]: I1027 08:19:31.621389 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.621371865 podStartE2EDuration="1.621371865s" podCreationTimestamp="2025-10-27 08:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:19:31.606594775 +0000 UTC m=+1.641229703" watchObservedRunningTime="2025-10-27 08:19:31.621371865 +0000 UTC m=+1.656006793"
Oct 27 08:19:32.101447 kubelet[2803]: E1027 08:19:32.101407 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:32.101447 kubelet[2803]: E1027 08:19:32.101449 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:32.101935 kubelet[2803]: E1027 08:19:32.101592 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:32.347464 update_engine[1587]: I20251027 08:19:32.347366 1587 update_attempter.cc:509] Updating boot flags...
Oct 27 08:19:33.242283 kubelet[2803]: E1027 08:19:33.242202 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:19:35.384605 kubelet[2803]: I1027 08:19:35.384559 2803 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 27 08:19:35.385104 kubelet[2803]: I1027 08:19:35.385090 2803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 27 08:19:35.385137 containerd[1611]: time="2025-10-27T08:19:35.384892007Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 27 08:19:35.947298 kubelet[2803]: E1027 08:19:35.947257 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:36.107120 kubelet[2803]: E1027 08:19:36.107084 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:36.393158 systemd[1]: Created slice kubepods-besteffort-poded9cf6c7_7467_4458_b65b_09c798ba1ff6.slice - libcontainer container kubepods-besteffort-poded9cf6c7_7467_4458_b65b_09c798ba1ff6.slice. Oct 27 08:19:36.395126 kubelet[2803]: I1027 08:19:36.395081 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed9cf6c7-7467-4458-b65b-09c798ba1ff6-xtables-lock\") pod \"kube-proxy-grtwd\" (UID: \"ed9cf6c7-7467-4458-b65b-09c798ba1ff6\") " pod="kube-system/kube-proxy-grtwd" Oct 27 08:19:36.395126 kubelet[2803]: I1027 08:19:36.395112 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed9cf6c7-7467-4458-b65b-09c798ba1ff6-kube-proxy\") pod \"kube-proxy-grtwd\" (UID: \"ed9cf6c7-7467-4458-b65b-09c798ba1ff6\") " pod="kube-system/kube-proxy-grtwd" Oct 27 08:19:36.395512 kubelet[2803]: I1027 08:19:36.395127 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed9cf6c7-7467-4458-b65b-09c798ba1ff6-lib-modules\") pod \"kube-proxy-grtwd\" (UID: \"ed9cf6c7-7467-4458-b65b-09c798ba1ff6\") " pod="kube-system/kube-proxy-grtwd" Oct 27 08:19:36.395512 kubelet[2803]: I1027 08:19:36.395154 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s74z\" 
(UniqueName: \"kubernetes.io/projected/ed9cf6c7-7467-4458-b65b-09c798ba1ff6-kube-api-access-7s74z\") pod \"kube-proxy-grtwd\" (UID: \"ed9cf6c7-7467-4458-b65b-09c798ba1ff6\") " pod="kube-system/kube-proxy-grtwd" Oct 27 08:19:36.461976 systemd[1]: Created slice kubepods-besteffort-podcec42425_4371_4c22_9f03_fa62023131a0.slice - libcontainer container kubepods-besteffort-podcec42425_4371_4c22_9f03_fa62023131a0.slice. Oct 27 08:19:36.495738 kubelet[2803]: I1027 08:19:36.495680 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zskf6\" (UniqueName: \"kubernetes.io/projected/cec42425-4371-4c22-9f03-fa62023131a0-kube-api-access-zskf6\") pod \"tigera-operator-7dcd859c48-4v6pq\" (UID: \"cec42425-4371-4c22-9f03-fa62023131a0\") " pod="tigera-operator/tigera-operator-7dcd859c48-4v6pq" Oct 27 08:19:36.495877 kubelet[2803]: I1027 08:19:36.495838 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cec42425-4371-4c22-9f03-fa62023131a0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4v6pq\" (UID: \"cec42425-4371-4c22-9f03-fa62023131a0\") " pod="tigera-operator/tigera-operator-7dcd859c48-4v6pq" Oct 27 08:19:36.702778 kubelet[2803]: E1027 08:19:36.702300 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:36.703673 containerd[1611]: time="2025-10-27T08:19:36.703441275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grtwd,Uid:ed9cf6c7-7467-4458-b65b-09c798ba1ff6,Namespace:kube-system,Attempt:0,}" Oct 27 08:19:36.749354 containerd[1611]: time="2025-10-27T08:19:36.749278198Z" level=info msg="connecting to shim 9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43" 
address="unix:///run/containerd/s/51f2d75e512b558bd1d5b67a2e7a9f8251704998b5d1ca0948442c8a007b1a98" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:19:36.766051 containerd[1611]: time="2025-10-27T08:19:36.765871079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4v6pq,Uid:cec42425-4371-4c22-9f03-fa62023131a0,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:19:36.788618 containerd[1611]: time="2025-10-27T08:19:36.788270699Z" level=info msg="connecting to shim a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94" address="unix:///run/containerd/s/85485aa88d099ae0bf9b0b74773a27864d4000f444d71406dc1789d6a70cfcf6" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:19:36.820650 systemd[1]: Started cri-containerd-9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43.scope - libcontainer container 9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43. Oct 27 08:19:36.824919 systemd[1]: Started cri-containerd-a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94.scope - libcontainer container a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94. 
Oct 27 08:19:36.851670 containerd[1611]: time="2025-10-27T08:19:36.851626505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grtwd,Uid:ed9cf6c7-7467-4458-b65b-09c798ba1ff6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43\"" Oct 27 08:19:36.852297 kubelet[2803]: E1027 08:19:36.852256 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:36.857555 containerd[1611]: time="2025-10-27T08:19:36.857509618Z" level=info msg="CreateContainer within sandbox \"9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:19:36.874838 containerd[1611]: time="2025-10-27T08:19:36.874749634Z" level=info msg="Container 1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:19:36.876899 containerd[1611]: time="2025-10-27T08:19:36.876858535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4v6pq,Uid:cec42425-4371-4c22-9f03-fa62023131a0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94\"" Oct 27 08:19:36.879725 containerd[1611]: time="2025-10-27T08:19:36.879682258Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:19:36.890881 containerd[1611]: time="2025-10-27T08:19:36.890815416Z" level=info msg="CreateContainer within sandbox \"9fd7fadd9dd9bfd8b4166a64c44a54954662c54e014644f3b1a67678003abf43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38\"" Oct 27 08:19:36.891454 containerd[1611]: time="2025-10-27T08:19:36.891411806Z" level=info msg="StartContainer for 
\"1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38\"" Oct 27 08:19:36.893164 containerd[1611]: time="2025-10-27T08:19:36.893127132Z" level=info msg="connecting to shim 1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38" address="unix:///run/containerd/s/51f2d75e512b558bd1d5b67a2e7a9f8251704998b5d1ca0948442c8a007b1a98" protocol=ttrpc version=3 Oct 27 08:19:36.917624 systemd[1]: Started cri-containerd-1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38.scope - libcontainer container 1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38. Oct 27 08:19:36.966186 containerd[1611]: time="2025-10-27T08:19:36.966078828Z" level=info msg="StartContainer for \"1d284f3d8be824469f1b287444a68c70e6890fbf9b67c839ea62b4f4340efb38\" returns successfully" Oct 27 08:19:37.113646 kubelet[2803]: E1027 08:19:37.113593 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:37.114169 kubelet[2803]: E1027 08:19:37.114147 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:37.132115 kubelet[2803]: I1027 08:19:37.132003 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grtwd" podStartSLOduration=1.131974983 podStartE2EDuration="1.131974983s" podCreationTimestamp="2025-10-27 08:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:19:37.130901914 +0000 UTC m=+7.165536832" watchObservedRunningTime="2025-10-27 08:19:37.131974983 +0000 UTC m=+7.166609931" Oct 27 08:19:38.289237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224693194.mount: Deactivated successfully. 
Oct 27 08:19:38.767418 containerd[1611]: time="2025-10-27T08:19:38.767291910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:38.768105 containerd[1611]: time="2025-10-27T08:19:38.768050082Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:19:38.769306 containerd[1611]: time="2025-10-27T08:19:38.769265890Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:38.771759 containerd[1611]: time="2025-10-27T08:19:38.771721402Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:38.772463 containerd[1611]: time="2025-10-27T08:19:38.772421767Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.892690354s" Oct 27 08:19:38.772576 containerd[1611]: time="2025-10-27T08:19:38.772458706Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:19:38.776067 containerd[1611]: time="2025-10-27T08:19:38.776023866Z" level=info msg="CreateContainer within sandbox \"a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:19:38.783864 containerd[1611]: time="2025-10-27T08:19:38.783820805Z" level=info msg="Container 
f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:19:38.790138 containerd[1611]: time="2025-10-27T08:19:38.790073955Z" level=info msg="CreateContainer within sandbox \"a539089ea9a59980e4a462a520f353a938ec133186c9c83e7e4086d0b4865d94\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1\"" Oct 27 08:19:38.790710 containerd[1611]: time="2025-10-27T08:19:38.790660184Z" level=info msg="StartContainer for \"f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1\"" Oct 27 08:19:38.791454 containerd[1611]: time="2025-10-27T08:19:38.791423296Z" level=info msg="connecting to shim f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1" address="unix:///run/containerd/s/85485aa88d099ae0bf9b0b74773a27864d4000f444d71406dc1789d6a70cfcf6" protocol=ttrpc version=3 Oct 27 08:19:38.821900 systemd[1]: Started cri-containerd-f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1.scope - libcontainer container f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1. 
Oct 27 08:19:38.855329 containerd[1611]: time="2025-10-27T08:19:38.855277353Z" level=info msg="StartContainer for \"f78d04adcde9e4d9fb1b2e5a6235e5c443ebb116d70d9743674d3cdebcd542b1\" returns successfully" Oct 27 08:19:39.124890 kubelet[2803]: I1027 08:19:39.124806 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4v6pq" podStartSLOduration=1.230706863 podStartE2EDuration="3.12478853s" podCreationTimestamp="2025-10-27 08:19:36 +0000 UTC" firstStartedPulling="2025-10-27 08:19:36.879163516 +0000 UTC m=+6.913798444" lastFinishedPulling="2025-10-27 08:19:38.773245183 +0000 UTC m=+8.807880111" observedRunningTime="2025-10-27 08:19:39.124275191 +0000 UTC m=+9.158910119" watchObservedRunningTime="2025-10-27 08:19:39.12478853 +0000 UTC m=+9.159423458" Oct 27 08:19:40.099811 kubelet[2803]: E1027 08:19:40.099746 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:40.123012 kubelet[2803]: E1027 08:19:40.122964 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:41.126735 kubelet[2803]: E1027 08:19:41.126670 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:43.247193 kubelet[2803]: E1027 08:19:43.247152 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:44.687688 sudo[1842]: pam_unix(sudo:session): session closed for user root Oct 27 08:19:44.689717 sshd[1841]: Connection closed by 10.0.0.1 port 51976 Oct 27 08:19:44.691162 sshd-session[1838]: 
pam_unix(sshd:session): session closed for user core Oct 27 08:19:44.701008 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:19:44.701852 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:51976.service: Deactivated successfully. Oct 27 08:19:44.709899 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:19:44.711660 systemd[1]: session-9.scope: Consumed 6.105s CPU time, 219.8M memory peak. Oct 27 08:19:44.719023 systemd-logind[1586]: Removed session 9. Oct 27 08:19:49.377794 kubelet[2803]: I1027 08:19:49.377665 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a9ae861-1c8d-438c-9f1e-445cbfdafbf1-tigera-ca-bundle\") pod \"calico-typha-6557cd4f66-5k6zq\" (UID: \"3a9ae861-1c8d-438c-9f1e-445cbfdafbf1\") " pod="calico-system/calico-typha-6557cd4f66-5k6zq" Oct 27 08:19:49.377794 kubelet[2803]: I1027 08:19:49.377717 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a9ae861-1c8d-438c-9f1e-445cbfdafbf1-typha-certs\") pod \"calico-typha-6557cd4f66-5k6zq\" (UID: \"3a9ae861-1c8d-438c-9f1e-445cbfdafbf1\") " pod="calico-system/calico-typha-6557cd4f66-5k6zq" Oct 27 08:19:49.377794 kubelet[2803]: I1027 08:19:49.377748 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dmd\" (UniqueName: \"kubernetes.io/projected/3a9ae861-1c8d-438c-9f1e-445cbfdafbf1-kube-api-access-r6dmd\") pod \"calico-typha-6557cd4f66-5k6zq\" (UID: \"3a9ae861-1c8d-438c-9f1e-445cbfdafbf1\") " pod="calico-system/calico-typha-6557cd4f66-5k6zq" Oct 27 08:19:49.384816 systemd[1]: Created slice kubepods-besteffort-pod3a9ae861_1c8d_438c_9f1e_445cbfdafbf1.slice - libcontainer container kubepods-besteffort-pod3a9ae861_1c8d_438c_9f1e_445cbfdafbf1.slice. 
Oct 27 08:19:49.630187 systemd[1]: Created slice kubepods-besteffort-pod18fd271f_3f98_4f7c_9e82_edb5b9e31897.slice - libcontainer container kubepods-besteffort-pod18fd271f_3f98_4f7c_9e82_edb5b9e31897.slice. Oct 27 08:19:49.681166 kubelet[2803]: I1027 08:19:49.681079 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-flexvol-driver-host\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681166 kubelet[2803]: I1027 08:19:49.681146 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-lib-modules\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681405 kubelet[2803]: I1027 08:19:49.681184 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-policysync\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681405 kubelet[2803]: I1027 08:19:49.681220 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-var-run-calico\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681405 kubelet[2803]: I1027 08:19:49.681284 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-cni-net-dir\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681405 kubelet[2803]: I1027 08:19:49.681320 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rwd5\" (UniqueName: \"kubernetes.io/projected/18fd271f-3f98-4f7c-9e82-edb5b9e31897-kube-api-access-9rwd5\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681405 kubelet[2803]: I1027 08:19:49.681355 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18fd271f-3f98-4f7c-9e82-edb5b9e31897-tigera-ca-bundle\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681600 kubelet[2803]: I1027 08:19:49.681405 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-xtables-lock\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681600 kubelet[2803]: I1027 08:19:49.681442 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-cni-bin-dir\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681600 kubelet[2803]: I1027 08:19:49.681504 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-cni-log-dir\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681600 kubelet[2803]: I1027 08:19:49.681537 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/18fd271f-3f98-4f7c-9e82-edb5b9e31897-node-certs\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.681600 kubelet[2803]: I1027 08:19:49.681569 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18fd271f-3f98-4f7c-9e82-edb5b9e31897-var-lib-calico\") pod \"calico-node-v97cq\" (UID: \"18fd271f-3f98-4f7c-9e82-edb5b9e31897\") " pod="calico-system/calico-node-v97cq" Oct 27 08:19:49.694377 kubelet[2803]: E1027 08:19:49.694322 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:49.695070 containerd[1611]: time="2025-10-27T08:19:49.695018073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6557cd4f66-5k6zq,Uid:3a9ae861-1c8d-438c-9f1e-445cbfdafbf1,Namespace:calico-system,Attempt:0,}" Oct 27 08:19:49.724175 containerd[1611]: time="2025-10-27T08:19:49.724113372Z" level=info msg="connecting to shim 847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89" address="unix:///run/containerd/s/8e26ee275b3c42f9d105e40b77c058500feee72641a83dee18376f0b3f260ae0" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:19:49.752683 systemd[1]: Started cri-containerd-847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89.scope - libcontainer container 847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89. 
Oct 27 08:19:49.787644 kubelet[2803]: E1027 08:19:49.787557 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.787644 kubelet[2803]: W1027 08:19:49.787583 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.824239 kubelet[2803]: E1027 08:19:49.824166 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.824668 kubelet[2803]: E1027 08:19:49.824633 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.824668 kubelet[2803]: W1027 08:19:49.824654 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.824731 kubelet[2803]: E1027 08:19:49.824675 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.824932 kubelet[2803]: E1027 08:19:49.824907 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.824932 kubelet[2803]: W1027 08:19:49.824920 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.824932 kubelet[2803]: E1027 08:19:49.824928 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.849886 containerd[1611]: time="2025-10-27T08:19:49.849835834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6557cd4f66-5k6zq,Uid:3a9ae861-1c8d-438c-9f1e-445cbfdafbf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89\"" Oct 27 08:19:49.872513 kubelet[2803]: E1027 08:19:49.872336 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:49.873921 containerd[1611]: time="2025-10-27T08:19:49.873750659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:19:49.885074 kubelet[2803]: E1027 08:19:49.884963 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.885074 kubelet[2803]: W1027 08:19:49.884999 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.885074 kubelet[2803]: E1027 08:19:49.885023 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.906881 kubelet[2803]: E1027 08:19:49.906674 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:49.935511 kubelet[2803]: E1027 08:19:49.934818 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:49.935941 containerd[1611]: time="2025-10-27T08:19:49.935896042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v97cq,Uid:18fd271f-3f98-4f7c-9e82-edb5b9e31897,Namespace:calico-system,Attempt:0,}" Oct 27 08:19:49.968536 containerd[1611]: time="2025-10-27T08:19:49.968447979Z" level=info msg="connecting to shim b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122" address="unix:///run/containerd/s/4678550512bd1c27598c2c8a948cdaebbfa3ba79b9f7b2e446456889a9a60cb9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:19:49.979792 kubelet[2803]: E1027 08:19:49.979754 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.979892 kubelet[2803]: W1027 08:19:49.979783 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.979892 kubelet[2803]: E1027 08:19:49.979828 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.980122 kubelet[2803]: E1027 08:19:49.980100 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.980122 kubelet[2803]: W1027 08:19:49.980116 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.980122 kubelet[2803]: E1027 08:19:49.980126 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.980418 kubelet[2803]: E1027 08:19:49.980388 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.980418 kubelet[2803]: W1027 08:19:49.980410 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.980418 kubelet[2803]: E1027 08:19:49.980419 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.980802 kubelet[2803]: E1027 08:19:49.980780 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.980802 kubelet[2803]: W1027 08:19:49.980794 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.980802 kubelet[2803]: E1027 08:19:49.980803 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.981105 kubelet[2803]: E1027 08:19:49.981082 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.981105 kubelet[2803]: W1027 08:19:49.981096 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.981105 kubelet[2803]: E1027 08:19:49.981105 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.981410 kubelet[2803]: E1027 08:19:49.981368 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.981410 kubelet[2803]: W1027 08:19:49.981390 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.981410 kubelet[2803]: E1027 08:19:49.981399 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.981678 kubelet[2803]: E1027 08:19:49.981634 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.981678 kubelet[2803]: W1027 08:19:49.981657 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.981678 kubelet[2803]: E1027 08:19:49.981666 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.981934 kubelet[2803]: E1027 08:19:49.981893 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.981934 kubelet[2803]: W1027 08:19:49.981905 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.981934 kubelet[2803]: E1027 08:19:49.981915 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.982277 kubelet[2803]: E1027 08:19:49.982250 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.982277 kubelet[2803]: W1027 08:19:49.982267 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.982277 kubelet[2803]: E1027 08:19:49.982277 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.982561 kubelet[2803]: E1027 08:19:49.982539 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.982561 kubelet[2803]: W1027 08:19:49.982560 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.982677 kubelet[2803]: E1027 08:19:49.982586 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.982853 kubelet[2803]: E1027 08:19:49.982831 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.982853 kubelet[2803]: W1027 08:19:49.982847 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.982947 kubelet[2803]: E1027 08:19:49.982859 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.983189 kubelet[2803]: E1027 08:19:49.983171 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.983281 kubelet[2803]: W1027 08:19:49.983193 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.983281 kubelet[2803]: E1027 08:19:49.983202 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.983431 kubelet[2803]: E1027 08:19:49.983416 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.983431 kubelet[2803]: W1027 08:19:49.983428 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.983431 kubelet[2803]: E1027 08:19:49.983436 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.983681 kubelet[2803]: E1027 08:19:49.983637 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.983681 kubelet[2803]: W1027 08:19:49.983651 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.983681 kubelet[2803]: E1027 08:19:49.983661 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.984017 kubelet[2803]: E1027 08:19:49.983984 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.984017 kubelet[2803]: W1027 08:19:49.983997 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.984017 kubelet[2803]: E1027 08:19:49.984007 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.984253 kubelet[2803]: E1027 08:19:49.984229 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.984253 kubelet[2803]: W1027 08:19:49.984246 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.984358 kubelet[2803]: E1027 08:19:49.984259 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.984544 kubelet[2803]: E1027 08:19:49.984522 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.984544 kubelet[2803]: W1027 08:19:49.984536 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.984544 kubelet[2803]: E1027 08:19:49.984546 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.984780 kubelet[2803]: E1027 08:19:49.984759 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.984780 kubelet[2803]: W1027 08:19:49.984773 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.984780 kubelet[2803]: E1027 08:19:49.984783 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.985015 kubelet[2803]: E1027 08:19:49.984993 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.985015 kubelet[2803]: W1027 08:19:49.985007 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.985015 kubelet[2803]: E1027 08:19:49.985017 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.985236 kubelet[2803]: E1027 08:19:49.985213 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.985236 kubelet[2803]: W1027 08:19:49.985227 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.985236 kubelet[2803]: E1027 08:19:49.985239 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.985661 kubelet[2803]: E1027 08:19:49.985636 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.985661 kubelet[2803]: W1027 08:19:49.985652 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.985661 kubelet[2803]: E1027 08:19:49.985663 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.985801 kubelet[2803]: I1027 08:19:49.985705 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7b4\" (UniqueName: \"kubernetes.io/projected/35ccb1c2-1c56-4133-a090-83b933f5454f-kube-api-access-db7b4\") pod \"csi-node-driver-r2dvr\" (UID: \"35ccb1c2-1c56-4133-a090-83b933f5454f\") " pod="calico-system/csi-node-driver-r2dvr" Oct 27 08:19:49.986524 kubelet[2803]: E1027 08:19:49.985995 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.986524 kubelet[2803]: W1027 08:19:49.986014 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.986524 kubelet[2803]: E1027 08:19:49.986026 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.986524 kubelet[2803]: I1027 08:19:49.986051 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ccb1c2-1c56-4133-a090-83b933f5454f-kubelet-dir\") pod \"csi-node-driver-r2dvr\" (UID: \"35ccb1c2-1c56-4133-a090-83b933f5454f\") " pod="calico-system/csi-node-driver-r2dvr" Oct 27 08:19:49.986524 kubelet[2803]: E1027 08:19:49.986269 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.986524 kubelet[2803]: W1027 08:19:49.986280 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.986524 kubelet[2803]: E1027 08:19:49.986290 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.986524 kubelet[2803]: I1027 08:19:49.986317 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/35ccb1c2-1c56-4133-a090-83b933f5454f-varrun\") pod \"csi-node-driver-r2dvr\" (UID: \"35ccb1c2-1c56-4133-a090-83b933f5454f\") " pod="calico-system/csi-node-driver-r2dvr" Oct 27 08:19:49.986894 kubelet[2803]: E1027 08:19:49.986785 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.986894 kubelet[2803]: W1027 08:19:49.986812 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.986894 kubelet[2803]: E1027 08:19:49.986843 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.987425 kubelet[2803]: E1027 08:19:49.987399 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.987425 kubelet[2803]: W1027 08:19:49.987418 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.987562 kubelet[2803]: E1027 08:19:49.987431 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.987791 kubelet[2803]: E1027 08:19:49.987768 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.987791 kubelet[2803]: W1027 08:19:49.987787 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.987883 kubelet[2803]: E1027 08:19:49.987801 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.988182 kubelet[2803]: I1027 08:19:49.987900 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35ccb1c2-1c56-4133-a090-83b933f5454f-registration-dir\") pod \"csi-node-driver-r2dvr\" (UID: \"35ccb1c2-1c56-4133-a090-83b933f5454f\") " pod="calico-system/csi-node-driver-r2dvr" Oct 27 08:19:49.988264 kubelet[2803]: E1027 08:19:49.988237 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.988264 kubelet[2803]: W1027 08:19:49.988252 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.988264 kubelet[2803]: E1027 08:19:49.988263 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.988666 kubelet[2803]: E1027 08:19:49.988642 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.988666 kubelet[2803]: W1027 08:19:49.988659 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.988763 kubelet[2803]: E1027 08:19:49.988672 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.989066 kubelet[2803]: E1027 08:19:49.989037 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.989066 kubelet[2803]: W1027 08:19:49.989062 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.989146 kubelet[2803]: E1027 08:19:49.989077 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.989316 kubelet[2803]: E1027 08:19:49.989291 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.989316 kubelet[2803]: W1027 08:19:49.989306 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.989316 kubelet[2803]: E1027 08:19:49.989317 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.990630 kubelet[2803]: E1027 08:19:49.990596 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.990630 kubelet[2803]: W1027 08:19:49.990613 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.990630 kubelet[2803]: E1027 08:19:49.990624 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.990899 kubelet[2803]: E1027 08:19:49.990867 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.990899 kubelet[2803]: W1027 08:19:49.990881 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.990899 kubelet[2803]: E1027 08:19:49.990890 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.991213 kubelet[2803]: E1027 08:19:49.991187 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.991213 kubelet[2803]: W1027 08:19:49.991205 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.991293 kubelet[2803]: E1027 08:19:49.991217 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:49.991293 kubelet[2803]: I1027 08:19:49.991239 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35ccb1c2-1c56-4133-a090-83b933f5454f-socket-dir\") pod \"csi-node-driver-r2dvr\" (UID: \"35ccb1c2-1c56-4133-a090-83b933f5454f\") " pod="calico-system/csi-node-driver-r2dvr" Oct 27 08:19:49.991561 kubelet[2803]: E1027 08:19:49.991538 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.991561 kubelet[2803]: W1027 08:19:49.991552 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.991561 kubelet[2803]: E1027 08:19:49.991562 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:49.991832 kubelet[2803]: E1027 08:19:49.991809 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:49.991832 kubelet[2803]: W1027 08:19:49.991821 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:49.991832 kubelet[2803]: E1027 08:19:49.991830 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.004775 systemd[1]: Started cri-containerd-b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122.scope - libcontainer container b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122. Oct 27 08:19:50.040281 containerd[1611]: time="2025-10-27T08:19:50.040225489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v97cq,Uid:18fd271f-3f98-4f7c-9e82-edb5b9e31897,Namespace:calico-system,Attempt:0,} returns sandbox id \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\"" Oct 27 08:19:50.041311 kubelet[2803]: E1027 08:19:50.041260 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:50.092197 kubelet[2803]: E1027 08:19:50.092134 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.092197 kubelet[2803]: W1027 08:19:50.092159 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.092197 kubelet[2803]: E1027 08:19:50.092185 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.092444 kubelet[2803]: E1027 08:19:50.092417 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.092444 kubelet[2803]: W1027 08:19:50.092426 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.092444 kubelet[2803]: E1027 08:19:50.092435 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:50.092737 kubelet[2803]: E1027 08:19:50.092705 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.092737 kubelet[2803]: W1027 08:19:50.092725 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.092834 kubelet[2803]: E1027 08:19:50.092739 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.093015 kubelet[2803]: E1027 08:19:50.092986 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.093015 kubelet[2803]: W1027 08:19:50.093000 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.093015 kubelet[2803]: E1027 08:19:50.093011 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:50.093254 kubelet[2803]: E1027 08:19:50.093235 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.093254 kubelet[2803]: W1027 08:19:50.093251 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.093341 kubelet[2803]: E1027 08:19:50.093263 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.093457 kubelet[2803]: E1027 08:19:50.093435 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.093457 kubelet[2803]: W1027 08:19:50.093446 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.093457 kubelet[2803]: E1027 08:19:50.093456 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:50.093654 kubelet[2803]: E1027 08:19:50.093638 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.093654 kubelet[2803]: W1027 08:19:50.093649 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.093724 kubelet[2803]: E1027 08:19:50.093659 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.093887 kubelet[2803]: E1027 08:19:50.093871 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.093887 kubelet[2803]: W1027 08:19:50.093882 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.093968 kubelet[2803]: E1027 08:19:50.093892 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:50.094085 kubelet[2803]: E1027 08:19:50.094068 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.094085 kubelet[2803]: W1027 08:19:50.094079 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.094177 kubelet[2803]: E1027 08:19:50.094088 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:50.094435 kubelet[2803]: E1027 08:19:50.094419 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.094435 kubelet[2803]: W1027 08:19:50.094431 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.094534 kubelet[2803]: E1027 08:19:50.094443 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:50.094680 kubelet[2803]: E1027 08:19:50.094664 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:50.094680 kubelet[2803]: W1027 08:19:50.094675 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:50.094680 kubelet[2803]: E1027 08:19:50.094686 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:51.077936 kubelet[2803]: E1027 08:19:51.077887 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:51.941904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959692179.mount: Deactivated successfully. 
Oct 27 08:19:52.288966 containerd[1611]: time="2025-10-27T08:19:52.288898221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:52.289642 containerd[1611]: time="2025-10-27T08:19:52.289612104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 27 08:19:52.290673 containerd[1611]: time="2025-10-27T08:19:52.290624017Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:52.292497 containerd[1611]: time="2025-10-27T08:19:52.292455483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:52.292946 containerd[1611]: time="2025-10-27T08:19:52.292911712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.419118502s" Oct 27 08:19:52.292999 containerd[1611]: time="2025-10-27T08:19:52.292949202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:19:52.293877 containerd[1611]: time="2025-10-27T08:19:52.293855698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:19:52.310277 containerd[1611]: time="2025-10-27T08:19:52.310230616Z" level=info msg="CreateContainer within sandbox \"847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:19:52.318268 containerd[1611]: time="2025-10-27T08:19:52.318235306Z" level=info msg="Container d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:19:52.325999 containerd[1611]: time="2025-10-27T08:19:52.325944230Z" level=info msg="CreateContainer within sandbox \"847f4f6b9b8009a457778aa488bd7ccfd956ddc58debbfb39176862e4a912e89\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887\"" Oct 27 08:19:52.326458 containerd[1611]: time="2025-10-27T08:19:52.326434923Z" level=info msg="StartContainer for \"d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887\"" Oct 27 08:19:52.327455 containerd[1611]: time="2025-10-27T08:19:52.327430847Z" level=info msg="connecting to shim d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887" address="unix:///run/containerd/s/8e26ee275b3c42f9d105e40b77c058500feee72641a83dee18376f0b3f260ae0" protocol=ttrpc version=3 Oct 27 08:19:52.350711 systemd[1]: Started cri-containerd-d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887.scope - libcontainer container d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887. 
Oct 27 08:19:52.406420 containerd[1611]: time="2025-10-27T08:19:52.406307448Z" level=info msg="StartContainer for \"d9a575db4fcf919d974f5ac362373bf9317bce57b892917e8edf34a411411887\" returns successfully" Oct 27 08:19:53.078159 kubelet[2803]: E1027 08:19:53.078095 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:53.149944 kubelet[2803]: E1027 08:19:53.148847 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:53.159222 kubelet[2803]: I1027 08:19:53.159019 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6557cd4f66-5k6zq" podStartSLOduration=1.738769334 podStartE2EDuration="4.158999078s" podCreationTimestamp="2025-10-27 08:19:49 +0000 UTC" firstStartedPulling="2025-10-27 08:19:49.873499977 +0000 UTC m=+19.908134895" lastFinishedPulling="2025-10-27 08:19:52.293729711 +0000 UTC m=+22.328364639" observedRunningTime="2025-10-27 08:19:53.158825442 +0000 UTC m=+23.193460370" watchObservedRunningTime="2025-10-27 08:19:53.158999078 +0000 UTC m=+23.193633996" Oct 27 08:19:53.206304 kubelet[2803]: E1027 08:19:53.206259 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:53.206304 kubelet[2803]: W1027 08:19:53.206286 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:53.206304 kubelet[2803]: E1027 08:19:53.206310 2803 plugins.go:703] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:53.219597 kubelet[2803]: E1027 08:19:53.219572 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:53.219597 kubelet[2803]: W1027 08:19:53.219588 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:53.219597 kubelet[2803]: E1027 08:19:53.219600 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:53.219941 kubelet[2803]: E1027 08:19:53.219921 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:53.219941 kubelet[2803]: W1027 08:19:53.219936 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:53.220012 kubelet[2803]: E1027 08:19:53.219951 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:19:53.220167 kubelet[2803]: E1027 08:19:53.220148 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:19:53.220167 kubelet[2803]: W1027 08:19:53.220163 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:19:53.220216 kubelet[2803]: E1027 08:19:53.220174 2803 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:19:53.695263 containerd[1611]: time="2025-10-27T08:19:53.695169324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:53.696062 containerd[1611]: time="2025-10-27T08:19:53.696006388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 27 08:19:53.697412 containerd[1611]: time="2025-10-27T08:19:53.697372428Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:53.700974 containerd[1611]: time="2025-10-27T08:19:53.700360459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:53.701600 containerd[1611]: time="2025-10-27T08:19:53.701533165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.407646348s" Oct 27 08:19:53.701600 containerd[1611]: time="2025-10-27T08:19:53.701591074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:19:53.708694 containerd[1611]: time="2025-10-27T08:19:53.708628511Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:19:53.720733 containerd[1611]: time="2025-10-27T08:19:53.720675673Z" level=info msg="Container 592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:19:53.733557 containerd[1611]: time="2025-10-27T08:19:53.731871062Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\"" Oct 27 08:19:53.734076 containerd[1611]: time="2025-10-27T08:19:53.734033920Z" level=info msg="StartContainer for \"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\"" Oct 27 08:19:53.736269 containerd[1611]: time="2025-10-27T08:19:53.736221345Z" level=info msg="connecting to shim 592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079" address="unix:///run/containerd/s/4678550512bd1c27598c2c8a948cdaebbfa3ba79b9f7b2e446456889a9a60cb9" protocol=ttrpc version=3 Oct 27 08:19:53.762677 systemd[1]: Started cri-containerd-592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079.scope - libcontainer container 592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079. Oct 27 08:19:53.816388 containerd[1611]: time="2025-10-27T08:19:53.816324654Z" level=info msg="StartContainer for \"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\" returns successfully" Oct 27 08:19:53.829453 systemd[1]: cri-containerd-592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079.scope: Deactivated successfully. 
Oct 27 08:19:53.832808 containerd[1611]: time="2025-10-27T08:19:53.832766342Z" level=info msg="received exit event container_id:\"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\" id:\"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\" pid:3518 exited_at:{seconds:1761553193 nanos:832298101}" Oct 27 08:19:53.832965 containerd[1611]: time="2025-10-27T08:19:53.832928948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\" id:\"592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079\" pid:3518 exited_at:{seconds:1761553193 nanos:832298101}" Oct 27 08:19:53.866941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-592baa3115265b25f815bb5975b1a6be9a81e080c9e3c6ace95a8b20eb58d079-rootfs.mount: Deactivated successfully. Oct 27 08:19:54.188182 kubelet[2803]: E1027 08:19:54.188049 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:54.188182 kubelet[2803]: I1027 08:19:54.188152 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:19:54.190101 kubelet[2803]: E1027 08:19:54.188379 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:55.077826 kubelet[2803]: E1027 08:19:55.077771 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:55.192101 kubelet[2803]: E1027 08:19:55.192042 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:19:55.193569 containerd[1611]: time="2025-10-27T08:19:55.193028351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:19:57.078546 kubelet[2803]: E1027 08:19:57.078444 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:59.078828 kubelet[2803]: E1027 08:19:59.078724 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:19:59.485046 containerd[1611]: time="2025-10-27T08:19:59.484897988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:59.485724 containerd[1611]: time="2025-10-27T08:19:59.485691429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:19:59.486890 containerd[1611]: time="2025-10-27T08:19:59.486839036Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:59.488703 containerd[1611]: time="2025-10-27T08:19:59.488666440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:19:59.489253 containerd[1611]: 
time="2025-10-27T08:19:59.489213008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.296151033s" Oct 27 08:19:59.489253 containerd[1611]: time="2025-10-27T08:19:59.489242252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:19:59.493285 containerd[1611]: time="2025-10-27T08:19:59.493234705Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:19:59.503731 containerd[1611]: time="2025-10-27T08:19:59.503650718Z" level=info msg="Container 9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:19:59.514265 containerd[1611]: time="2025-10-27T08:19:59.514212106Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\"" Oct 27 08:19:59.514834 containerd[1611]: time="2025-10-27T08:19:59.514812144Z" level=info msg="StartContainer for \"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\"" Oct 27 08:19:59.516354 containerd[1611]: time="2025-10-27T08:19:59.516331109Z" level=info msg="connecting to shim 9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28" address="unix:///run/containerd/s/4678550512bd1c27598c2c8a948cdaebbfa3ba79b9f7b2e446456889a9a60cb9" protocol=ttrpc version=3 Oct 27 08:19:59.542679 systemd[1]: Started 
cri-containerd-9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28.scope - libcontainer container 9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28. Oct 27 08:19:59.594119 containerd[1611]: time="2025-10-27T08:19:59.594060229Z" level=info msg="StartContainer for \"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\" returns successfully" Oct 27 08:20:00.203243 kubelet[2803]: E1027 08:20:00.203199 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:01.079024 kubelet[2803]: E1027 08:20:01.078936 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:01.205606 kubelet[2803]: E1027 08:20:01.205572 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:01.278536 systemd[1]: cri-containerd-9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28.scope: Deactivated successfully. Oct 27 08:20:01.278983 systemd[1]: cri-containerd-9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28.scope: Consumed 722ms CPU time, 176.2M memory peak, 4M read from disk, 171.3M written to disk. 
Oct 27 08:20:01.280984 containerd[1611]: time="2025-10-27T08:20:01.280918774Z" level=info msg="received exit event container_id:\"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\" id:\"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\" pid:3579 exited_at:{seconds:1761553201 nanos:280405409}" Oct 27 08:20:01.311335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28-rootfs.mount: Deactivated successfully. Oct 27 08:20:01.390580 containerd[1611]: time="2025-10-27T08:20:01.390357142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\" id:\"9895b4f2aae60130565b1ac584a099a825dc225374e222a3d3cccf47fc910e28\" pid:3579 exited_at:{seconds:1761553201 nanos:280405409}" Oct 27 08:20:01.395146 kubelet[2803]: I1027 08:20:01.393617 2803 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 08:20:01.668524 systemd[1]: Created slice kubepods-burstable-pod69450d0b_ab93_42f2_8967_aaf9ddd99661.slice - libcontainer container kubepods-burstable-pod69450d0b_ab93_42f2_8967_aaf9ddd99661.slice. 
Oct 27 08:20:01.675607 kubelet[2803]: I1027 08:20:01.675559 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69450d0b-ab93-42f2-8967-aaf9ddd99661-config-volume\") pod \"coredns-674b8bbfcf-4rk84\" (UID: \"69450d0b-ab93-42f2-8967-aaf9ddd99661\") " pod="kube-system/coredns-674b8bbfcf-4rk84" Oct 27 08:20:01.675607 kubelet[2803]: I1027 08:20:01.675609 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fg8\" (UniqueName: \"kubernetes.io/projected/69450d0b-ab93-42f2-8967-aaf9ddd99661-kube-api-access-c7fg8\") pod \"coredns-674b8bbfcf-4rk84\" (UID: \"69450d0b-ab93-42f2-8967-aaf9ddd99661\") " pod="kube-system/coredns-674b8bbfcf-4rk84" Oct 27 08:20:01.680384 systemd[1]: Created slice kubepods-besteffort-pod89b2c447_220a_4d0e_8d3d_30370d7bddf9.slice - libcontainer container kubepods-besteffort-pod89b2c447_220a_4d0e_8d3d_30370d7bddf9.slice. Oct 27 08:20:01.691049 systemd[1]: Created slice kubepods-besteffort-pod7f6d69dd_b2c4_4429_9e0c_5cd505f17f7e.slice - libcontainer container kubepods-besteffort-pod7f6d69dd_b2c4_4429_9e0c_5cd505f17f7e.slice. Oct 27 08:20:01.698907 systemd[1]: Created slice kubepods-besteffort-pod6dad6e5f_6112_429b_aab8_41593a07cb3d.slice - libcontainer container kubepods-besteffort-pod6dad6e5f_6112_429b_aab8_41593a07cb3d.slice. Oct 27 08:20:01.704485 systemd[1]: Created slice kubepods-besteffort-pod05a006b1_9824_4b9a_80e0_9406b9da0421.slice - libcontainer container kubepods-besteffort-pod05a006b1_9824_4b9a_80e0_9406b9da0421.slice. Oct 27 08:20:01.711805 systemd[1]: Created slice kubepods-burstable-pod6444bf74_ad4b_45b5_b1d7_45b95c454a19.slice - libcontainer container kubepods-burstable-pod6444bf74_ad4b_45b5_b1d7_45b95c454a19.slice. 
Oct 27 08:20:01.718376 systemd[1]: Created slice kubepods-besteffort-pod8832f7e4_0882_4808_9716_2c453d412432.slice - libcontainer container kubepods-besteffort-pod8832f7e4_0882_4808_9716_2c453d412432.slice. Oct 27 08:20:01.776774 kubelet[2803]: I1027 08:20:01.776696 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4cmd\" (UniqueName: \"kubernetes.io/projected/6444bf74-ad4b-45b5-b1d7-45b95c454a19-kube-api-access-g4cmd\") pod \"coredns-674b8bbfcf-wg2gg\" (UID: \"6444bf74-ad4b-45b5-b1d7-45b95c454a19\") " pod="kube-system/coredns-674b8bbfcf-wg2gg" Oct 27 08:20:01.776774 kubelet[2803]: I1027 08:20:01.776764 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8832f7e4-0882-4808-9716-2c453d412432-goldmane-ca-bundle\") pod \"goldmane-666569f655-q42qv\" (UID: \"8832f7e4-0882-4808-9716-2c453d412432\") " pod="calico-system/goldmane-666569f655-q42qv" Oct 27 08:20:01.776774 kubelet[2803]: I1027 08:20:01.776795 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dctvv\" (UniqueName: \"kubernetes.io/projected/89b2c447-220a-4d0e-8d3d-30370d7bddf9-kube-api-access-dctvv\") pod \"calico-kube-controllers-76bd5dfdc6-jcthj\" (UID: \"89b2c447-220a-4d0e-8d3d-30370d7bddf9\") " pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" Oct 27 08:20:01.777060 kubelet[2803]: I1027 08:20:01.776814 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n255\" (UniqueName: \"kubernetes.io/projected/05a006b1-9824-4b9a-80e0-9406b9da0421-kube-api-access-5n255\") pod \"whisker-6d9f8684bf-784vq\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") " pod="calico-system/whisker-6d9f8684bf-784vq" Oct 27 08:20:01.777060 kubelet[2803]: I1027 08:20:01.776871 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8832f7e4-0882-4808-9716-2c453d412432-config\") pod \"goldmane-666569f655-q42qv\" (UID: \"8832f7e4-0882-4808-9716-2c453d412432\") " pod="calico-system/goldmane-666569f655-q42qv" Oct 27 08:20:01.777060 kubelet[2803]: I1027 08:20:01.776920 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8832f7e4-0882-4808-9716-2c453d412432-goldmane-key-pair\") pod \"goldmane-666569f655-q42qv\" (UID: \"8832f7e4-0882-4808-9716-2c453d412432\") " pod="calico-system/goldmane-666569f655-q42qv" Oct 27 08:20:01.777060 kubelet[2803]: I1027 08:20:01.776952 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6dad6e5f-6112-429b-aab8-41593a07cb3d-calico-apiserver-certs\") pod \"calico-apiserver-57d5854b59-mtcd9\" (UID: \"6dad6e5f-6112-429b-aab8-41593a07cb3d\") " pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" Oct 27 08:20:01.777060 kubelet[2803]: I1027 08:20:01.776972 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hmwp\" (UniqueName: \"kubernetes.io/projected/6dad6e5f-6112-429b-aab8-41593a07cb3d-kube-api-access-8hmwp\") pod \"calico-apiserver-57d5854b59-mtcd9\" (UID: \"6dad6e5f-6112-429b-aab8-41593a07cb3d\") " pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" Oct 27 08:20:01.777178 kubelet[2803]: I1027 08:20:01.776988 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b2c447-220a-4d0e-8d3d-30370d7bddf9-tigera-ca-bundle\") pod \"calico-kube-controllers-76bd5dfdc6-jcthj\" (UID: \"89b2c447-220a-4d0e-8d3d-30370d7bddf9\") " 
pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" Oct 27 08:20:01.777178 kubelet[2803]: I1027 08:20:01.777008 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6444bf74-ad4b-45b5-b1d7-45b95c454a19-config-volume\") pod \"coredns-674b8bbfcf-wg2gg\" (UID: \"6444bf74-ad4b-45b5-b1d7-45b95c454a19\") " pod="kube-system/coredns-674b8bbfcf-wg2gg" Oct 27 08:20:01.777178 kubelet[2803]: I1027 08:20:01.777045 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kzj\" (UniqueName: \"kubernetes.io/projected/7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e-kube-api-access-45kzj\") pod \"calico-apiserver-57d5854b59-xhcvf\" (UID: \"7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e\") " pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" Oct 27 08:20:01.777178 kubelet[2803]: I1027 08:20:01.777066 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-backend-key-pair\") pod \"whisker-6d9f8684bf-784vq\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") " pod="calico-system/whisker-6d9f8684bf-784vq" Oct 27 08:20:01.777178 kubelet[2803]: I1027 08:20:01.777092 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbrr\" (UniqueName: \"kubernetes.io/projected/8832f7e4-0882-4808-9716-2c453d412432-kube-api-access-7gbrr\") pod \"goldmane-666569f655-q42qv\" (UID: \"8832f7e4-0882-4808-9716-2c453d412432\") " pod="calico-system/goldmane-666569f655-q42qv" Oct 27 08:20:01.777295 kubelet[2803]: I1027 08:20:01.777130 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e-calico-apiserver-certs\") pod \"calico-apiserver-57d5854b59-xhcvf\" (UID: \"7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e\") " pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" Oct 27 08:20:01.777295 kubelet[2803]: I1027 08:20:01.777171 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-ca-bundle\") pod \"whisker-6d9f8684bf-784vq\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") " pod="calico-system/whisker-6d9f8684bf-784vq" Oct 27 08:20:01.974563 kubelet[2803]: E1027 08:20:01.974375 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:01.975362 containerd[1611]: time="2025-10-27T08:20:01.975058204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rk84,Uid:69450d0b-ab93-42f2-8967-aaf9ddd99661,Namespace:kube-system,Attempt:0,}" Oct 27 08:20:01.987023 containerd[1611]: time="2025-10-27T08:20:01.986975383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bd5dfdc6-jcthj,Uid:89b2c447-220a-4d0e-8d3d-30370d7bddf9,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:02.000967 containerd[1611]: time="2025-10-27T08:20:02.000927266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-xhcvf,Uid:7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:20:02.003712 containerd[1611]: time="2025-10-27T08:20:02.003667512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-mtcd9,Uid:6dad6e5f-6112-429b-aab8-41593a07cb3d,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:20:02.010761 containerd[1611]: time="2025-10-27T08:20:02.010727465Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6d9f8684bf-784vq,Uid:05a006b1-9824-4b9a-80e0-9406b9da0421,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:02.016432 kubelet[2803]: E1027 08:20:02.015528 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:02.028571 containerd[1611]: time="2025-10-27T08:20:02.027867704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q42qv,Uid:8832f7e4-0882-4808-9716-2c453d412432,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:02.032885 containerd[1611]: time="2025-10-27T08:20:02.032761868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wg2gg,Uid:6444bf74-ad4b-45b5-b1d7-45b95c454a19,Namespace:kube-system,Attempt:0,}" Oct 27 08:20:02.125899 containerd[1611]: time="2025-10-27T08:20:02.125845210Z" level=error msg="Failed to destroy network for sandbox \"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:20:02.128802 containerd[1611]: time="2025-10-27T08:20:02.128654617Z" level=error msg="Failed to destroy network for sandbox \"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:20:02.164133 containerd[1611]: time="2025-10-27T08:20:02.164074745Z" level=error msg="Failed to destroy network for sandbox \"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/"
Oct 27 08:20:02.183760 containerd[1611]: time="2025-10-27T08:20:02.183670747Z" level=error msg="Failed to destroy network for sandbox \"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.187913 containerd[1611]: time="2025-10-27T08:20:02.187793581Z" level=error msg="Failed to destroy network for sandbox \"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.190814 containerd[1611]: time="2025-10-27T08:20:02.190778449Z" level=error msg="Failed to destroy network for sandbox \"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.193086 containerd[1611]: time="2025-10-27T08:20:02.193062059Z" level=error msg="Failed to destroy network for sandbox \"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.209805 kubelet[2803]: E1027 08:20:02.209774 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:20:02.210830 containerd[1611]: time="2025-10-27T08:20:02.210786435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Oct 27 08:20:02.220570 containerd[1611]: time="2025-10-27T08:20:02.220499351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-xhcvf,Uid:7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.220776 kubelet[2803]: E1027 08:20:02.220748 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.220842 kubelet[2803]: E1027 08:20:02.220809 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf"
Oct 27 08:20:02.220842 kubelet[2803]: E1027 08:20:02.220830 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf"
Oct 27 08:20:02.220902 kubelet[2803]: E1027 08:20:02.220881 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d5854b59-xhcvf_calico-apiserver(7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d5854b59-xhcvf_calico-apiserver(7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8bd6bd53947b4fe9418e41707c7d12a81b6bf0f36100d656078d45ca8387387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e"
Oct 27 08:20:02.267920 containerd[1611]: time="2025-10-27T08:20:02.267289077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rk84,Uid:69450d0b-ab93-42f2-8967-aaf9ddd99661,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.268014 kubelet[2803]: E1027 08:20:02.267425 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.268014 kubelet[2803]: E1027 08:20:02.267457 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rk84"
Oct 27 08:20:02.268014 kubelet[2803]: E1027 08:20:02.267507 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4rk84"
Oct 27 08:20:02.268122 kubelet[2803]: E1027 08:20:02.267567 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4rk84_kube-system(69450d0b-ab93-42f2-8967-aaf9ddd99661)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4rk84_kube-system(69450d0b-ab93-42f2-8967-aaf9ddd99661)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"579e8edce7fc94383cc8fcebafdf616e1184b1648724037d4ea3b83972574007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4rk84" podUID="69450d0b-ab93-42f2-8967-aaf9ddd99661"
Oct 27 08:20:02.383079 containerd[1611]: time="2025-10-27T08:20:02.383000156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bd5dfdc6-jcthj,Uid:89b2c447-220a-4d0e-8d3d-30370d7bddf9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.396301 containerd[1611]: time="2025-10-27T08:20:02.396239437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-mtcd9,Uid:6dad6e5f-6112-429b-aab8-41593a07cb3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.396501 kubelet[2803]: E1027 08:20:02.396434 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.396552 kubelet[2803]: E1027 08:20:02.396507 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9"
Oct 27 08:20:02.396552 kubelet[2803]: E1027 08:20:02.396536 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9"
Oct 27 08:20:02.396609 kubelet[2803]: E1027 08:20:02.396587 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d5854b59-mtcd9_calico-apiserver(6dad6e5f-6112-429b-aab8-41593a07cb3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d5854b59-mtcd9_calico-apiserver(6dad6e5f-6112-429b-aab8-41593a07cb3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47072718b503ea643d75d3bcd077e9fd4908280ad756d91cd656b273b9d784f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d"
Oct 27 08:20:02.396811 kubelet[2803]: E1027 08:20:02.396782 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.396851 kubelet[2803]: E1027 08:20:02.396815 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj"
Oct 27 08:20:02.396851 kubelet[2803]: E1027 08:20:02.396839 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj"
Oct 27 08:20:02.396921 kubelet[2803]: E1027 08:20:02.396875 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76bd5dfdc6-jcthj_calico-system(89b2c447-220a-4d0e-8d3d-30370d7bddf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76bd5dfdc6-jcthj_calico-system(89b2c447-220a-4d0e-8d3d-30370d7bddf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f121f2f4a72e7e8d601f870cca9938a9676250346008fc94b8fabee334282bd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9"
Oct 27 08:20:02.397419 containerd[1611]: time="2025-10-27T08:20:02.397378086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q42qv,Uid:8832f7e4-0882-4808-9716-2c453d412432,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.397560 kubelet[2803]: E1027 08:20:02.397524 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.397623 kubelet[2803]: E1027 08:20:02.397565 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q42qv"
Oct 27 08:20:02.397623 kubelet[2803]: E1027 08:20:02.397580 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q42qv"
Oct 27 08:20:02.397623 kubelet[2803]: E1027 08:20:02.397613 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-q42qv_calico-system(8832f7e4-0882-4808-9716-2c453d412432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-q42qv_calico-system(8832f7e4-0882-4808-9716-2c453d412432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bf6d03589c35da2962c1f21760b94aa94eb8f59709ff201786a4f0505102403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432"
Oct 27 08:20:02.398434 containerd[1611]: time="2025-10-27T08:20:02.398358287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9f8684bf-784vq,Uid:05a006b1-9824-4b9a-80e0-9406b9da0421,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.398591 kubelet[2803]: E1027 08:20:02.398528 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.398591 kubelet[2803]: E1027 08:20:02.398586 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d9f8684bf-784vq"
Oct 27 08:20:02.398699 kubelet[2803]: E1027 08:20:02.398608 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d9f8684bf-784vq"
Oct 27 08:20:02.398699 kubelet[2803]: E1027 08:20:02.398648 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d9f8684bf-784vq_calico-system(05a006b1-9824-4b9a-80e0-9406b9da0421)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d9f8684bf-784vq_calico-system(05a006b1-9824-4b9a-80e0-9406b9da0421)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8fd4623ab9a41ca157546b92e251afc1d329758550d0887b8bf81d84ca6e8c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d9f8684bf-784vq" podUID="05a006b1-9824-4b9a-80e0-9406b9da0421"
Oct 27 08:20:02.399407 containerd[1611]: time="2025-10-27T08:20:02.399372332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wg2gg,Uid:6444bf74-ad4b-45b5-b1d7-45b95c454a19,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.399573 kubelet[2803]: E1027 08:20:02.399516 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:02.399622 kubelet[2803]: E1027 08:20:02.399574 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wg2gg"
Oct 27 08:20:02.399622 kubelet[2803]: E1027 08:20:02.399589 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wg2gg"
Oct 27 08:20:02.399681 kubelet[2803]: E1027 08:20:02.399629 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wg2gg_kube-system(6444bf74-ad4b-45b5-b1d7-45b95c454a19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wg2gg_kube-system(6444bf74-ad4b-45b5-b1d7-45b95c454a19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3aee24f56443a18f6809add3b7e61a54ef7d496b2e7ee64bdc51ff5c8c6e9b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wg2gg" podUID="6444bf74-ad4b-45b5-b1d7-45b95c454a19"
Oct 27 08:20:03.086236 systemd[1]: Created slice kubepods-besteffort-pod35ccb1c2_1c56_4133_a090_83b933f5454f.slice - libcontainer container kubepods-besteffort-pod35ccb1c2_1c56_4133_a090_83b933f5454f.slice.
Oct 27 08:20:03.089136 containerd[1611]: time="2025-10-27T08:20:03.089092766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2dvr,Uid:35ccb1c2-1c56-4133-a090-83b933f5454f,Namespace:calico-system,Attempt:0,}"
Oct 27 08:20:03.144345 containerd[1611]: time="2025-10-27T08:20:03.144282531Z" level=error msg="Failed to destroy network for sandbox \"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:03.146118 containerd[1611]: time="2025-10-27T08:20:03.145981803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2dvr,Uid:35ccb1c2-1c56-4133-a090-83b933f5454f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:03.146429 kubelet[2803]: E1027 08:20:03.146352 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 27 08:20:03.146621 kubelet[2803]: E1027 08:20:03.146440 2803 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2dvr"
Oct 27 08:20:03.146621 kubelet[2803]: E1027 08:20:03.146489 2803 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2dvr"
Oct 27 08:20:03.146621 kubelet[2803]: E1027 08:20:03.146550 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee2d62bb407d979d4c019d2d6d4ac59bbe2158b21aac442aa52f08d86bbf1c50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f"
Oct 27 08:20:03.147166 systemd[1]: run-netns-cni\x2d110a0aff\x2dc733\x2dd823\x2d1878\x2da2b7ed427aba.mount: Deactivated successfully.
Oct 27 08:20:08.310098 kubelet[2803]: I1027 08:20:08.310052 2803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 27 08:20:08.311217 kubelet[2803]: E1027 08:20:08.310792 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:20:09.225190 kubelet[2803]: E1027 08:20:09.225150 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:20:10.000864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586054478.mount: Deactivated successfully.
Oct 27 08:20:11.151441 containerd[1611]: time="2025-10-27T08:20:11.151369890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:20:11.152250 containerd[1611]: time="2025-10-27T08:20:11.152215587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Oct 27 08:20:11.153463 containerd[1611]: time="2025-10-27T08:20:11.153414758Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:20:11.172911 containerd[1611]: time="2025-10-27T08:20:11.172839070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 27 08:20:11.173384 containerd[1611]: time="2025-10-27T08:20:11.173336554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.96251329s"
Oct 27 08:20:11.173384 containerd[1611]: time="2025-10-27T08:20:11.173371339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Oct 27 08:20:11.194923 containerd[1611]: time="2025-10-27T08:20:11.194871187Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 27 08:20:11.210010 containerd[1611]: time="2025-10-27T08:20:11.209959340Z" level=info msg="Container 7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:20:11.214409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427128231.mount: Deactivated successfully.
Oct 27 08:20:11.222576 containerd[1611]: time="2025-10-27T08:20:11.222526922Z" level=info msg="CreateContainer within sandbox \"b860a322eff57c82ecf6e235eeaa748f0427edc354159e05350a1c4357b56122\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\""
Oct 27 08:20:11.223144 containerd[1611]: time="2025-10-27T08:20:11.223118632Z" level=info msg="StartContainer for \"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\""
Oct 27 08:20:11.224892 containerd[1611]: time="2025-10-27T08:20:11.224863027Z" level=info msg="connecting to shim 7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be" address="unix:///run/containerd/s/4678550512bd1c27598c2c8a948cdaebbfa3ba79b9f7b2e446456889a9a60cb9" protocol=ttrpc version=3
Oct 27 08:20:11.257606 systemd[1]: Started cri-containerd-7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be.scope - libcontainer container 7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be.
Oct 27 08:20:11.303638 containerd[1611]: time="2025-10-27T08:20:11.303585971Z" level=info msg="StartContainer for \"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\" returns successfully"
Oct 27 08:20:11.382164 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 27 08:20:11.383158 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Oct 27 08:20:11.543697 kubelet[2803]: I1027 08:20:11.543644 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n255\" (UniqueName: \"kubernetes.io/projected/05a006b1-9824-4b9a-80e0-9406b9da0421-kube-api-access-5n255\") pod \"05a006b1-9824-4b9a-80e0-9406b9da0421\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") "
Oct 27 08:20:11.543697 kubelet[2803]: I1027 08:20:11.543690 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-backend-key-pair\") pod \"05a006b1-9824-4b9a-80e0-9406b9da0421\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") "
Oct 27 08:20:11.543697 kubelet[2803]: I1027 08:20:11.543708 2803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-ca-bundle\") pod \"05a006b1-9824-4b9a-80e0-9406b9da0421\" (UID: \"05a006b1-9824-4b9a-80e0-9406b9da0421\") "
Oct 27 08:20:11.544205 kubelet[2803]: I1027 08:20:11.544182 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "05a006b1-9824-4b9a-80e0-9406b9da0421" (UID: "05a006b1-9824-4b9a-80e0-9406b9da0421"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 27 08:20:11.549099 kubelet[2803]: I1027 08:20:11.549062 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a006b1-9824-4b9a-80e0-9406b9da0421-kube-api-access-5n255" (OuterVolumeSpecName: "kube-api-access-5n255") pod "05a006b1-9824-4b9a-80e0-9406b9da0421" (UID: "05a006b1-9824-4b9a-80e0-9406b9da0421"). InnerVolumeSpecName "kube-api-access-5n255". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 27 08:20:11.549364 kubelet[2803]: I1027 08:20:11.549337 2803 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "05a006b1-9824-4b9a-80e0-9406b9da0421" (UID: "05a006b1-9824-4b9a-80e0-9406b9da0421"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 27 08:20:11.644211 kubelet[2803]: I1027 08:20:11.644165 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5n255\" (UniqueName: \"kubernetes.io/projected/05a006b1-9824-4b9a-80e0-9406b9da0421-kube-api-access-5n255\") on node \"localhost\" DevicePath \"\""
Oct 27 08:20:11.644211 kubelet[2803]: I1027 08:20:11.644198 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Oct 27 08:20:11.644211 kubelet[2803]: I1027 08:20:11.644207 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05a006b1-9824-4b9a-80e0-9406b9da0421-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Oct 27 08:20:12.086515 systemd[1]: Removed slice kubepods-besteffort-pod05a006b1_9824_4b9a_80e0_9406b9da0421.slice - libcontainer container kubepods-besteffort-pod05a006b1_9824_4b9a_80e0_9406b9da0421.slice.
Oct 27 08:20:12.179988 systemd[1]: var-lib-kubelet-pods-05a006b1\x2d9824\x2d4b9a\x2d80e0\x2d9406b9da0421-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5n255.mount: Deactivated successfully.
Oct 27 08:20:12.180130 systemd[1]: var-lib-kubelet-pods-05a006b1\x2d9824\x2d4b9a\x2d80e0\x2d9406b9da0421-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Oct 27 08:20:12.240001 kubelet[2803]: E1027 08:20:12.239929 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:20:12.343948 containerd[1611]: time="2025-10-27T08:20:12.343800241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\" id:\"bb34f8257d9e290918ffcb877bcb3a7f8e4f33391327eb20e2cd81e59198ccb7\" pid:3973 exit_status:1 exited_at:{seconds:1761553212 nanos:343365574}"
Oct 27 08:20:12.365027 kubelet[2803]: I1027 08:20:12.364793 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v97cq" podStartSLOduration=2.232586004 podStartE2EDuration="23.364775832s" podCreationTimestamp="2025-10-27 08:19:49 +0000 UTC" firstStartedPulling="2025-10-27 08:19:50.04183057 +0000 UTC m=+20.076465498" lastFinishedPulling="2025-10-27 08:20:11.174020398 +0000 UTC m=+41.208655326" observedRunningTime="2025-10-27 08:20:12.364482552 +0000 UTC m=+42.399117490" watchObservedRunningTime="2025-10-27 08:20:12.364775832 +0000 UTC m=+42.399410760"
Oct 27 08:20:12.419660 systemd[1]: Created slice kubepods-besteffort-pod0b6943bc_5290_49e7_ad2e_2226e6164e9a.slice - libcontainer container kubepods-besteffort-pod0b6943bc_5290_49e7_ad2e_2226e6164e9a.slice.
Oct 27 08:20:12.448812 kubelet[2803]: I1027 08:20:12.448753 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b6943bc-5290-49e7-ad2e-2226e6164e9a-whisker-backend-key-pair\") pod \"whisker-94557dddf-cnjrr\" (UID: \"0b6943bc-5290-49e7-ad2e-2226e6164e9a\") " pod="calico-system/whisker-94557dddf-cnjrr"
Oct 27 08:20:12.448812 kubelet[2803]: I1027 08:20:12.448812 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b6943bc-5290-49e7-ad2e-2226e6164e9a-whisker-ca-bundle\") pod \"whisker-94557dddf-cnjrr\" (UID: \"0b6943bc-5290-49e7-ad2e-2226e6164e9a\") " pod="calico-system/whisker-94557dddf-cnjrr"
Oct 27 08:20:12.449042 kubelet[2803]: I1027 08:20:12.448837 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhnq\" (UniqueName: \"kubernetes.io/projected/0b6943bc-5290-49e7-ad2e-2226e6164e9a-kube-api-access-rrhnq\") pod \"whisker-94557dddf-cnjrr\" (UID: \"0b6943bc-5290-49e7-ad2e-2226e6164e9a\") " pod="calico-system/whisker-94557dddf-cnjrr"
Oct 27 08:20:12.724598 containerd[1611]: time="2025-10-27T08:20:12.724259273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94557dddf-cnjrr,Uid:0b6943bc-5290-49e7-ad2e-2226e6164e9a,Namespace:calico-system,Attempt:0,}"
Oct 27 08:20:13.164859 systemd-networkd[1513]: vxlan.calico: Link UP
Oct 27 08:20:13.164868 systemd-networkd[1513]: vxlan.calico: Gained carrier
Oct 27 08:20:13.223705 systemd-networkd[1513]: cali1b2b7331b60: Link UP
Oct 27 08:20:13.224813 systemd-networkd[1513]: cali1b2b7331b60: Gained carrier
Oct 27 08:20:13.242336 kubelet[2803]: E1027 08:20:13.242240 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:20:13.246837 containerd[1611]: 2025-10-27 08:20:13.087 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--94557dddf--cnjrr-eth0 whisker-94557dddf- calico-system 0b6943bc-5290-49e7-ad2e-2226e6164e9a 933 0 2025-10-27 08:20:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:94557dddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-94557dddf-cnjrr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1b2b7331b60 [] [] }} ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-"
Oct 27 08:20:13.246837 containerd[1611]: 2025-10-27 08:20:13.087 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0"
Oct 27 08:20:13.246837 containerd[1611]: 2025-10-27 08:20:13.160 [INFO][4135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" HandleID="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Workload="localhost-k8s-whisker--94557dddf--cnjrr-eth0"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.164 [INFO][4135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" HandleID="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Workload="localhost-k8s-whisker--94557dddf--cnjrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-94557dddf-cnjrr", "timestamp":"2025-10-27 08:20:13.160776232 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.164 [INFO][4135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.164 [INFO][4135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.164 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.178 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" host="localhost"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.186 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.191 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.192 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.198 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 27 08:20:13.247074 containerd[1611]: 2025-10-27 08:20:13.198 [INFO][4135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a"
host="localhost" Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.203 [INFO][4135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.209 [INFO][4135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" host="localhost" Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.215 [INFO][4135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" host="localhost" Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.215 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" host="localhost" Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.215 [INFO][4135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:20:13.247565 containerd[1611]: 2025-10-27 08:20:13.215 [INFO][4135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" HandleID="k8s-pod-network.5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Workload="localhost-k8s-whisker--94557dddf--cnjrr-eth0" Oct 27 08:20:13.247695 containerd[1611]: 2025-10-27 08:20:13.219 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--94557dddf--cnjrr-eth0", GenerateName:"whisker-94557dddf-", Namespace:"calico-system", SelfLink:"", UID:"0b6943bc-5290-49e7-ad2e-2226e6164e9a", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"94557dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-94557dddf-cnjrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b2b7331b60", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:13.247695 containerd[1611]: 2025-10-27 08:20:13.219 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" Oct 27 08:20:13.247773 containerd[1611]: 2025-10-27 08:20:13.219 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b2b7331b60 ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" Oct 27 08:20:13.247773 containerd[1611]: 2025-10-27 08:20:13.225 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" Oct 27 08:20:13.248272 containerd[1611]: 2025-10-27 08:20:13.226 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--94557dddf--cnjrr-eth0", GenerateName:"whisker-94557dddf-", Namespace:"calico-system", SelfLink:"", UID:"0b6943bc-5290-49e7-ad2e-2226e6164e9a", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 20, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"94557dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a", Pod:"whisker-94557dddf-cnjrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b2b7331b60", MAC:"22:7f:c2:8d:15:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:13.248341 containerd[1611]: 2025-10-27 08:20:13.240 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" Namespace="calico-system" Pod="whisker-94557dddf-cnjrr" WorkloadEndpoint="localhost-k8s-whisker--94557dddf--cnjrr-eth0" Oct 27 08:20:13.333855 containerd[1611]: time="2025-10-27T08:20:13.333785466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\" id:\"30a622b383c905d8c23b7f7088aedb7567c452de9f3a849a5d9fb38d66bd073c\" pid:4200 exit_status:1 exited_at:{seconds:1761553213 nanos:333389894}" Oct 27 08:20:13.414909 containerd[1611]: time="2025-10-27T08:20:13.414814311Z" level=info msg="connecting to shim 5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a" address="unix:///run/containerd/s/a3d0a3819382f97e77839a15e162b1c36dcc0dc356bc8831d7a2610a0cc202ca" 
namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:13.446658 systemd[1]: Started cri-containerd-5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a.scope - libcontainer container 5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a. Oct 27 08:20:13.467268 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:13.511760 containerd[1611]: time="2025-10-27T08:20:13.511671656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94557dddf-cnjrr,Uid:0b6943bc-5290-49e7-ad2e-2226e6164e9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e00d05c09ecaa7e47528b0a7cca6432e313212d365aaded881682e371e52a3a\"" Oct 27 08:20:13.513836 containerd[1611]: time="2025-10-27T08:20:13.513807935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:20:13.890612 containerd[1611]: time="2025-10-27T08:20:13.890527409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:13.892221 containerd[1611]: time="2025-10-27T08:20:13.892151127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:20:13.897176 containerd[1611]: time="2025-10-27T08:20:13.897114132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:20:13.897520 kubelet[2803]: E1027 08:20:13.897421 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:13.897618 kubelet[2803]: E1027 08:20:13.897524 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:13.902686 kubelet[2803]: E1027 08:20:13.902635 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc141c34926d4069899023aa88fe0b1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePol
icy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:13.905713 containerd[1611]: time="2025-10-27T08:20:13.905649523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:20:14.081437 kubelet[2803]: I1027 08:20:14.081355 2803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a006b1-9824-4b9a-80e0-9406b9da0421" path="/var/lib/kubelet/pods/05a006b1-9824-4b9a-80e0-9406b9da0421/volumes" Oct 27 08:20:14.082360 kubelet[2803]: E1027 08:20:14.082262 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:14.082569 containerd[1611]: time="2025-10-27T08:20:14.082526970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rk84,Uid:69450d0b-ab93-42f2-8967-aaf9ddd99661,Namespace:kube-system,Attempt:0,}" Oct 27 08:20:14.082658 containerd[1611]: time="2025-10-27T08:20:14.082622760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q42qv,Uid:8832f7e4-0882-4808-9716-2c453d412432,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:14.082858 containerd[1611]: time="2025-10-27T08:20:14.082549261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bd5dfdc6-jcthj,Uid:89b2c447-220a-4d0e-8d3d-30370d7bddf9,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:14.228015 systemd-networkd[1513]: calied2ff181166: Link UP Oct 27 08:20:14.229300 systemd-networkd[1513]: calied2ff181166: Gained carrier Oct 27 08:20:14.243832 containerd[1611]: 2025-10-27 08:20:14.146 
[INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--q42qv-eth0 goldmane-666569f655- calico-system 8832f7e4-0882-4808-9716-2c453d412432 851 0 2025-10-27 08:19:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-q42qv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calied2ff181166 [] [] }} ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-" Oct 27 08:20:14.243832 containerd[1611]: 2025-10-27 08:20:14.147 [INFO][4307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.243832 containerd[1611]: 2025-10-27 08:20:14.189 [INFO][4338] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" HandleID="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Workload="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.189 [INFO][4338] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" HandleID="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Workload="localhost-k8s-goldmane--666569f655--q42qv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d90e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-q42qv", "timestamp":"2025-10-27 08:20:14.18915907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.189 [INFO][4338] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.189 [INFO][4338] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.189 [INFO][4338] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.197 [INFO][4338] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" host="localhost" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.201 [INFO][4338] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.204 [INFO][4338] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.206 [INFO][4338] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.208 [INFO][4338] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.244052 containerd[1611]: 2025-10-27 08:20:14.208 [INFO][4338] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" host="localhost" Oct 27 08:20:14.244259 
containerd[1611]: 2025-10-27 08:20:14.209 [INFO][4338] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e Oct 27 08:20:14.244259 containerd[1611]: 2025-10-27 08:20:14.213 [INFO][4338] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" host="localhost" Oct 27 08:20:14.244259 containerd[1611]: 2025-10-27 08:20:14.217 [INFO][4338] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" host="localhost" Oct 27 08:20:14.244259 containerd[1611]: 2025-10-27 08:20:14.218 [INFO][4338] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" host="localhost" Oct 27 08:20:14.244259 containerd[1611]: 2025-10-27 08:20:14.218 [INFO][4338] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:20:14.244259 containerd[1611]: 2025-10-27 08:20:14.218 [INFO][4338] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" HandleID="k8s-pod-network.e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Workload="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.244401 containerd[1611]: 2025-10-27 08:20:14.225 [INFO][4307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--q42qv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8832f7e4-0882-4808-9716-2c453d412432", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-q42qv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied2ff181166", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.244401 containerd[1611]: 2025-10-27 08:20:14.225 [INFO][4307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.244508 containerd[1611]: 2025-10-27 08:20:14.225 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2ff181166 ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.244508 containerd[1611]: 2025-10-27 08:20:14.230 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.244572 containerd[1611]: 2025-10-27 08:20:14.230 [INFO][4307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--q42qv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8832f7e4-0882-4808-9716-2c453d412432", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e", Pod:"goldmane-666569f655-q42qv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied2ff181166", MAC:"f2:34:bb:d3:80:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.244627 containerd[1611]: 2025-10-27 08:20:14.238 [INFO][4307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" Namespace="calico-system" Pod="goldmane-666569f655-q42qv" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--q42qv-eth0" Oct 27 08:20:14.270562 containerd[1611]: time="2025-10-27T08:20:14.270502397Z" level=info msg="connecting to shim e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e" address="unix:///run/containerd/s/ff3c37cd5874e9dd6a9f61809f9de05a812d2851f729040f1dac26c33e742584" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:14.301677 systemd[1]: Started cri-containerd-e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e.scope - libcontainer container e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e. 
Oct 27 08:20:14.318624 containerd[1611]: time="2025-10-27T08:20:14.318320323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:14.319134 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:14.319526 containerd[1611]: time="2025-10-27T08:20:14.319492433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:14.319592 containerd[1611]: time="2025-10-27T08:20:14.319502822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:20:14.319858 kubelet[2803]: E1027 08:20:14.319793 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:14.320371 kubelet[2803]: E1027 08:20:14.319859 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:14.320437 kubelet[2803]: E1027 08:20:14.320015 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:14.321549 kubelet[2803]: E1027 08:20:14.321496 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a" Oct 27 08:20:14.336050 systemd-networkd[1513]: cali58ef157d29c: Link UP Oct 27 08:20:14.337549 systemd-networkd[1513]: cali58ef157d29c: Gained carrier Oct 27 08:20:14.360555 containerd[1611]: 2025-10-27 08:20:14.140 [INFO][4305] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0 calico-kube-controllers-76bd5dfdc6- calico-system 89b2c447-220a-4d0e-8d3d-30370d7bddf9 846 0 2025-10-27 08:19:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76bd5dfdc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76bd5dfdc6-jcthj eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali58ef157d29c [] [] }} ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-" Oct 27 08:20:14.360555 containerd[1611]: 2025-10-27 08:20:14.140 [INFO][4305] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.360555 containerd[1611]: 2025-10-27 08:20:14.193 [INFO][4336] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" HandleID="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Workload="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.194 [INFO][4336] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" HandleID="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Workload="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76bd5dfdc6-jcthj", "timestamp":"2025-10-27 08:20:14.193900599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.194 [INFO][4336] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.218 [INFO][4336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.219 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.298 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" host="localhost" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.303 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.308 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.310 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.311 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.360783 containerd[1611]: 2025-10-27 08:20:14.311 [INFO][4336] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" host="localhost" Oct 27 08:20:14.361087 containerd[1611]: 2025-10-27 08:20:14.313 [INFO][4336] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c Oct 27 08:20:14.361087 containerd[1611]: 2025-10-27 08:20:14.317 [INFO][4336] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" host="localhost" Oct 27 08:20:14.361087 
containerd[1611]: 2025-10-27 08:20:14.324 [INFO][4336] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" host="localhost" Oct 27 08:20:14.361087 containerd[1611]: 2025-10-27 08:20:14.324 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" host="localhost" Oct 27 08:20:14.361087 containerd[1611]: 2025-10-27 08:20:14.325 [INFO][4336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:20:14.361087 containerd[1611]: 2025-10-27 08:20:14.325 [INFO][4336] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" HandleID="k8s-pod-network.18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Workload="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.361206 containerd[1611]: 2025-10-27 08:20:14.332 [INFO][4305] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0", GenerateName:"calico-kube-controllers-76bd5dfdc6-", Namespace:"calico-system", SelfLink:"", UID:"89b2c447-220a-4d0e-8d3d-30370d7bddf9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76bd5dfdc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76bd5dfdc6-jcthj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali58ef157d29c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.361259 containerd[1611]: 2025-10-27 08:20:14.332 [INFO][4305] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.361259 containerd[1611]: 2025-10-27 08:20:14.332 [INFO][4305] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58ef157d29c ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.361259 containerd[1611]: 2025-10-27 08:20:14.338 [INFO][4305] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" 
Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.361324 containerd[1611]: 2025-10-27 08:20:14.339 [INFO][4305] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0", GenerateName:"calico-kube-controllers-76bd5dfdc6-", Namespace:"calico-system", SelfLink:"", UID:"89b2c447-220a-4d0e-8d3d-30370d7bddf9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76bd5dfdc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c", Pod:"calico-kube-controllers-76bd5dfdc6-jcthj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali58ef157d29c", MAC:"ca:fd:88:95:30:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.361423 containerd[1611]: 2025-10-27 08:20:14.349 [INFO][4305] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" Namespace="calico-system" Pod="calico-kube-controllers-76bd5dfdc6-jcthj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bd5dfdc6--jcthj-eth0" Oct 27 08:20:14.361423 containerd[1611]: time="2025-10-27T08:20:14.361222644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q42qv,Uid:8832f7e4-0882-4808-9716-2c453d412432,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4f43c7dcebba5a10650542b380bdac9dd8350b15ee94c0ea608d391b211359e\"" Oct 27 08:20:14.363449 containerd[1611]: time="2025-10-27T08:20:14.363428283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:20:14.382632 containerd[1611]: time="2025-10-27T08:20:14.382586860Z" level=info msg="connecting to shim 18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c" address="unix:///run/containerd/s/4f0219e9151517aa93be9e6e6a3b04923981e5c0d0f60086cc5d89f61571a6d9" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:14.407636 systemd[1]: Started cri-containerd-18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c.scope - libcontainer container 18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c. 
Oct 27 08:20:14.466692 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:14.468201 systemd-networkd[1513]: cali61252e99ce9: Link UP Oct 27 08:20:14.468555 systemd-networkd[1513]: cali61252e99ce9: Gained carrier Oct 27 08:20:14.482525 containerd[1611]: 2025-10-27 08:20:14.151 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4rk84-eth0 coredns-674b8bbfcf- kube-system 69450d0b-ab93-42f2-8967-aaf9ddd99661 841 0 2025-10-27 08:19:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4rk84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali61252e99ce9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-" Oct 27 08:20:14.482525 containerd[1611]: 2025-10-27 08:20:14.151 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.482525 containerd[1611]: 2025-10-27 08:20:14.195 [INFO][4349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" HandleID="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Workload="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.195 [INFO][4349] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" HandleID="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Workload="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4rk84", "timestamp":"2025-10-27 08:20:14.195764988 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.196 [INFO][4349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.324 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.325 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.401 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" host="localhost" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.418 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.430 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.434 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.436 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:14.483010 containerd[1611]: 2025-10-27 08:20:14.438 [INFO][4349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" host="localhost" Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.440 [INFO][4349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.448 [INFO][4349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" host="localhost" Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.456 [INFO][4349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" host="localhost" Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.457 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" host="localhost" Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.457 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:20:14.483225 containerd[1611]: 2025-10-27 08:20:14.457 [INFO][4349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" HandleID="k8s-pod-network.c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Workload="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.483346 containerd[1611]: 2025-10-27 08:20:14.463 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rk84-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69450d0b-ab93-42f2-8967-aaf9ddd99661", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4rk84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61252e99ce9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.483417 containerd[1611]: 2025-10-27 08:20:14.463 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.483417 containerd[1611]: 2025-10-27 08:20:14.463 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61252e99ce9 ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.483417 containerd[1611]: 2025-10-27 08:20:14.469 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.483533 containerd[1611]: 2025-10-27 08:20:14.470 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4rk84-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"69450d0b-ab93-42f2-8967-aaf9ddd99661", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c", Pod:"coredns-674b8bbfcf-4rk84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali61252e99ce9", MAC:"ae:b7:a9:9a:c7:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:14.483533 containerd[1611]: 2025-10-27 08:20:14.478 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4rk84" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4rk84-eth0" Oct 27 08:20:14.514334 containerd[1611]: time="2025-10-27T08:20:14.514174858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bd5dfdc6-jcthj,Uid:89b2c447-220a-4d0e-8d3d-30370d7bddf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"18b37cb98b44f762cf0008bd6b650ebc7f1e01659ba21ea54213a24e3a88bd3c\"" Oct 27 08:20:14.515693 containerd[1611]: time="2025-10-27T08:20:14.515221292Z" level=info msg="connecting to shim c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c" address="unix:///run/containerd/s/a89751c8397f73fde0b42da631abe5074fc4be0e42511d9e248d78218f816205" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:14.565961 systemd[1]: Started cri-containerd-c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c.scope - libcontainer container c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c. 
Oct 27 08:20:14.580868 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:14.782280 containerd[1611]: time="2025-10-27T08:20:14.782224841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4rk84,Uid:69450d0b-ab93-42f2-8967-aaf9ddd99661,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c\"" Oct 27 08:20:14.783221 kubelet[2803]: E1027 08:20:14.783189 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:14.888517 containerd[1611]: time="2025-10-27T08:20:14.888447894Z" level=info msg="CreateContainer within sandbox \"c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:20:14.917764 containerd[1611]: time="2025-10-27T08:20:14.917725553Z" level=info msg="Container 66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:20:14.919660 systemd-networkd[1513]: vxlan.calico: Gained IPv6LL Oct 27 08:20:14.924788 containerd[1611]: time="2025-10-27T08:20:14.924740358Z" level=info msg="CreateContainer within sandbox \"c3837e0d5b32756c7392a3cf7e4db36b239fa7e6c5a5ca08f9aadf5c2631357c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5\"" Oct 27 08:20:14.926683 containerd[1611]: time="2025-10-27T08:20:14.926655022Z" level=info msg="StartContainer for \"66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5\"" Oct 27 08:20:14.928444 containerd[1611]: time="2025-10-27T08:20:14.928418231Z" level=info msg="connecting to shim 66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5" 
address="unix:///run/containerd/s/a89751c8397f73fde0b42da631abe5074fc4be0e42511d9e248d78218f816205" protocol=ttrpc version=3 Oct 27 08:20:14.947551 containerd[1611]: time="2025-10-27T08:20:14.947513279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:14.948565 containerd[1611]: time="2025-10-27T08:20:14.948528464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:20:14.948624 containerd[1611]: time="2025-10-27T08:20:14.948563099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:14.948745 kubelet[2803]: E1027 08:20:14.948705 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:14.948798 kubelet[2803]: E1027 08:20:14.948752 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:14.949007 kubelet[2803]: E1027 08:20:14.948964 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gbrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q42qv_calico-system(8832f7e4-0882-4808-9716-2c453d412432): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:14.949344 containerd[1611]: time="2025-10-27T08:20:14.949310342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:20:14.950214 kubelet[2803]: E1027 08:20:14.950186 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432" Oct 27 08:20:14.950622 systemd[1]: Started 
cri-containerd-66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5.scope - libcontainer container 66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5. Oct 27 08:20:14.985566 containerd[1611]: time="2025-10-27T08:20:14.985519881Z" level=info msg="StartContainer for \"66358dc8e6d7e95dba4af5a16dc438dff4da2ebb016cc8a3382d542a5629b2e5\" returns successfully" Oct 27 08:20:15.239717 systemd-networkd[1513]: cali1b2b7331b60: Gained IPv6LL Oct 27 08:20:15.256881 kubelet[2803]: E1027 08:20:15.256818 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432" Oct 27 08:20:15.259918 kubelet[2803]: E1027 08:20:15.259374 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:15.261061 kubelet[2803]: E1027 08:20:15.260985 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a" Oct 27 08:20:15.312919 containerd[1611]: time="2025-10-27T08:20:15.312846057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:15.495749 systemd-networkd[1513]: calied2ff181166: Gained IPv6LL Oct 27 08:20:15.571274 kubelet[2803]: I1027 08:20:15.570441 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4rk84" podStartSLOduration=39.570423051 podStartE2EDuration="39.570423051s" podCreationTimestamp="2025-10-27 08:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:20:15.570027408 +0000 UTC m=+45.604662336" watchObservedRunningTime="2025-10-27 08:20:15.570423051 +0000 UTC m=+45.605057979" Oct 27 08:20:15.590020 containerd[1611]: time="2025-10-27T08:20:15.589940159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:20:15.590532 containerd[1611]: time="2025-10-27T08:20:15.590062880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:15.590569 kubelet[2803]: E1027 08:20:15.590294 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:15.590569 kubelet[2803]: E1027 08:20:15.590367 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:15.590569 kubelet[2803]: E1027 08:20:15.590529 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/s
erviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76bd5dfdc6-jcthj_calico-system(89b2c447-220a-4d0e-8d3d-30370d7bddf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:15.592050 kubelet[2803]: E1027 08:20:15.591993 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9" Oct 27 08:20:15.943741 systemd-networkd[1513]: cali61252e99ce9: Gained IPv6LL Oct 27 08:20:15.944979 systemd-networkd[1513]: cali58ef157d29c: Gained IPv6LL Oct 27 08:20:16.082383 containerd[1611]: time="2025-10-27T08:20:16.082319338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2dvr,Uid:35ccb1c2-1c56-4133-a090-83b933f5454f,Namespace:calico-system,Attempt:0,}" Oct 27 08:20:16.186160 systemd-networkd[1513]: califf41e5bedcf: Link UP Oct 27 08:20:16.186423 systemd-networkd[1513]: califf41e5bedcf: Gained carrier Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.119 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--r2dvr-eth0 csi-node-driver- calico-system 35ccb1c2-1c56-4133-a090-83b933f5454f 734 0 2025-10-27 08:19:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-r2dvr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califf41e5bedcf [] [] }} ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.119 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.149 [INFO][4582] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" HandleID="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Workload="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.149 [INFO][4582] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" HandleID="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Workload="localhost-k8s-csi--node--driver--r2dvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-r2dvr", "timestamp":"2025-10-27 08:20:16.149484082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.149 [INFO][4582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.149 [INFO][4582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.149 [INFO][4582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.156 [INFO][4582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.160 [INFO][4582] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.164 [INFO][4582] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.166 [INFO][4582] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.168 [INFO][4582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.168 [INFO][4582] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.169 [INFO][4582] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.173 [INFO][4582] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.179 [INFO][4582] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.179 [INFO][4582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" host="localhost" Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.180 [INFO][4582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:20:16.202140 containerd[1611]: 2025-10-27 08:20:16.180 [INFO][4582] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" HandleID="k8s-pod-network.35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Workload="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.183 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2dvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35ccb1c2-1c56-4133-a090-83b933f5454f", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-r2dvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf41e5bedcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.183 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.183 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf41e5bedcf ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.187 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.187 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" 
Namespace="calico-system" Pod="csi-node-driver-r2dvr" WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r2dvr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35ccb1c2-1c56-4133-a090-83b933f5454f", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac", Pod:"csi-node-driver-r2dvr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califf41e5bedcf", MAC:"ae:68:83:f0:b3:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:16.202793 containerd[1611]: 2025-10-27 08:20:16.195 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" Namespace="calico-system" Pod="csi-node-driver-r2dvr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--r2dvr-eth0" Oct 27 08:20:16.223841 containerd[1611]: time="2025-10-27T08:20:16.223800779Z" level=info msg="connecting to shim 35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac" address="unix:///run/containerd/s/9323ca05621c89ba9dac54eb295664cb40de30db6ea8ad7b61feab83aa914f14" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:16.249624 systemd[1]: Started cri-containerd-35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac.scope - libcontainer container 35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac. Oct 27 08:20:16.261597 kubelet[2803]: E1027 08:20:16.261554 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9" Oct 27 08:20:16.262020 kubelet[2803]: E1027 08:20:16.261764 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:16.264274 kubelet[2803]: E1027 08:20:16.264182 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432" Oct 27 08:20:16.264962 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:16.294726 containerd[1611]: time="2025-10-27T08:20:16.294679673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2dvr,Uid:35ccb1c2-1c56-4133-a090-83b933f5454f,Namespace:calico-system,Attempt:0,} returns sandbox id \"35407f5f8ff93cae10abcf3c89dcff99d4ae07606855ae70604d5e9bc3991cac\"" Oct 27 08:20:16.296660 containerd[1611]: time="2025-10-27T08:20:16.296620806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:20:16.664463 containerd[1611]: time="2025-10-27T08:20:16.664391242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:16.665551 containerd[1611]: time="2025-10-27T08:20:16.665506054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:20:16.665610 containerd[1611]: time="2025-10-27T08:20:16.665555486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:20:16.665818 kubelet[2803]: E1027 08:20:16.665762 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:16.666074 kubelet[2803]: E1027 08:20:16.665823 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:16.666074 kubelet[2803]: E1027 08:20:16.665995 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinO
nce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:16.668164 containerd[1611]: time="2025-10-27T08:20:16.668127252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:20:16.984337 containerd[1611]: time="2025-10-27T08:20:16.984136572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:16.985435 containerd[1611]: time="2025-10-27T08:20:16.985357543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:20:16.985636 containerd[1611]: time="2025-10-27T08:20:16.985435839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:20:16.985735 kubelet[2803]: E1027 08:20:16.985682 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:20:16.985808 kubelet[2803]: 
E1027 08:20:16.985749 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:20:16.986083 kubelet[2803]: E1027 08:20:16.985998 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnly
RootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:16.987913 kubelet[2803]: E1027 08:20:16.987842 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:17.078689 kubelet[2803]: E1027 08:20:17.078615 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:17.079100 containerd[1611]: time="2025-10-27T08:20:17.078978911Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-xhcvf,Uid:7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:20:17.079100 containerd[1611]: time="2025-10-27T08:20:17.079065613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-mtcd9,Uid:6dad6e5f-6112-429b-aab8-41593a07cb3d,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:20:17.079708 containerd[1611]: time="2025-10-27T08:20:17.079546336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wg2gg,Uid:6444bf74-ad4b-45b5-b1d7-45b95c454a19,Namespace:kube-system,Attempt:0,}" Oct 27 08:20:17.266107 kubelet[2803]: E1027 08:20:17.265962 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:17.270803 kubelet[2803]: E1027 08:20:17.270714 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:17.406713 
systemd-networkd[1513]: calib4ab2c33d89: Link UP Oct 27 08:20:17.407540 systemd-networkd[1513]: calib4ab2c33d89: Gained carrier Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.145 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0 calico-apiserver-57d5854b59- calico-apiserver 7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e 850 0 2025-10-27 08:19:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d5854b59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d5854b59-xhcvf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4ab2c33d89 [] [] }} ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.146 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.192 [INFO][4694] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" HandleID="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Workload="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.195 [INFO][4694] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" HandleID="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Workload="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d5854b59-xhcvf", "timestamp":"2025-10-27 08:20:17.192080734 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.195 [INFO][4694] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.195 [INFO][4694] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.196 [INFO][4694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.223 [INFO][4694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.254 [INFO][4694] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.258 [INFO][4694] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.260 [INFO][4694] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.263 [INFO][4694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.264 [INFO][4694] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.265 [INFO][4694] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.310 [INFO][4694] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.397 [INFO][4694] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.397 [INFO][4694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" host="localhost" Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.397 [INFO][4694] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:20:17.434530 containerd[1611]: 2025-10-27 08:20:17.397 [INFO][4694] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" HandleID="k8s-pod-network.d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Workload="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.401 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0", GenerateName:"calico-apiserver-57d5854b59-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5854b59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d5854b59-xhcvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4ab2c33d89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.402 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.402 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4ab2c33d89 ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.407 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.408 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0", 
GenerateName:"calico-apiserver-57d5854b59-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5854b59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c", Pod:"calico-apiserver-57d5854b59-xhcvf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4ab2c33d89", MAC:"d6:91:0d:01:5b:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.435524 containerd[1611]: 2025-10-27 08:20:17.429 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-xhcvf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--xhcvf-eth0" Oct 27 08:20:17.624538 systemd-networkd[1513]: calif0612627b03: Link UP Oct 27 08:20:17.627729 systemd-networkd[1513]: calif0612627b03: Gained carrier Oct 27 08:20:17.657548 containerd[1611]: time="2025-10-27T08:20:17.657483955Z" level=info msg="connecting to shim 
d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c" address="unix:///run/containerd/s/7f5457f326b688af2cd17189be2cf917a87a03ea6faec8f7cba70e212c61435d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.153 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0 calico-apiserver-57d5854b59- calico-apiserver 6dad6e5f-6112-429b-aab8-41593a07cb3d 847 0 2025-10-27 08:19:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d5854b59 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d5854b59-mtcd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0612627b03 [] [] }} ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.153 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.196 [INFO][4700] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" HandleID="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Workload="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.658441 containerd[1611]: 
2025-10-27 08:20:17.196 [INFO][4700] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" HandleID="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Workload="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d5854b59-mtcd9", "timestamp":"2025-10-27 08:20:17.19633108 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.196 [INFO][4700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.397 [INFO][4700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.398 [INFO][4700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.417 [INFO][4700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.434 [INFO][4700] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.441 [INFO][4700] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.442 [INFO][4700] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.445 [INFO][4700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.445 [INFO][4700] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.447 [INFO][4700] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.478 [INFO][4700] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.611 [INFO][4700] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.611 [INFO][4700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" host="localhost" Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.611 [INFO][4700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:20:17.658441 containerd[1611]: 2025-10-27 08:20:17.611 [INFO][4700] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" HandleID="k8s-pod-network.736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Workload="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.617 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0", GenerateName:"calico-apiserver-57d5854b59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dad6e5f-6112-429b-aab8-41593a07cb3d", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5854b59", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d5854b59-mtcd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0612627b03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.617 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.617 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0612627b03 ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.629 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.629 [INFO][4648] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0", GenerateName:"calico-apiserver-57d5854b59-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dad6e5f-6112-429b-aab8-41593a07cb3d", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5854b59", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a", Pod:"calico-apiserver-57d5854b59-mtcd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0612627b03", MAC:"ea:e9:0d:70:09:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.659379 containerd[1611]: 2025-10-27 08:20:17.648 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" Namespace="calico-apiserver" Pod="calico-apiserver-57d5854b59-mtcd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5854b59--mtcd9-eth0" Oct 27 08:20:17.671826 systemd-networkd[1513]: califf41e5bedcf: Gained IPv6LL Oct 27 08:20:17.697106 containerd[1611]: time="2025-10-27T08:20:17.696998826Z" level=info msg="connecting to shim 736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a" address="unix:///run/containerd/s/000186e50e3689df8ae7e0c3817c2e393a26807de6f2188fa459234472857a30" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:17.701521 systemd-networkd[1513]: calie783f464ad9: Link UP Oct 27 08:20:17.703324 systemd-networkd[1513]: calie783f464ad9: Gained carrier Oct 27 08:20:17.711009 systemd[1]: Started cri-containerd-d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c.scope - libcontainer container d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c. Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.150 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0 coredns-674b8bbfcf- kube-system 6444bf74-ad4b-45b5-b1d7-45b95c454a19 849 0 2025-10-27 08:19:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wg2gg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie783f464ad9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.150 [INFO][4660] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.217 [INFO][4702] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" HandleID="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Workload="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.218 [INFO][4702] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" HandleID="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Workload="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039bc80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wg2gg", "timestamp":"2025-10-27 08:20:17.217815777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.218 [INFO][4702] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.612 [INFO][4702] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.612 [INFO][4702] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.625 [INFO][4702] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.635 [INFO][4702] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.651 [INFO][4702] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.658 [INFO][4702] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.660 [INFO][4702] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.661 [INFO][4702] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.663 [INFO][4702] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180 Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.668 [INFO][4702] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.677 [INFO][4702] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.678 [INFO][4702] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" host="localhost" Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.678 [INFO][4702] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:20:17.724539 containerd[1611]: 2025-10-27 08:20:17.679 [INFO][4702] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" HandleID="k8s-pod-network.25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Workload="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.692 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6444bf74-ad4b-45b5-b1d7-45b95c454a19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wg2gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie783f464ad9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.692 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.694 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie783f464ad9 ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.704 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.704 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6444bf74-ad4b-45b5-b1d7-45b95c454a19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 19, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180", Pod:"coredns-674b8bbfcf-wg2gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie783f464ad9", MAC:"2a:02:7a:3d:81:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:20:17.725746 containerd[1611]: 2025-10-27 08:20:17.718 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" Namespace="kube-system" Pod="coredns-674b8bbfcf-wg2gg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wg2gg-eth0" Oct 27 08:20:17.758782 systemd[1]: Started cri-containerd-736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a.scope - libcontainer container 736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a. Oct 27 08:20:17.770498 containerd[1611]: time="2025-10-27T08:20:17.770409506Z" level=info msg="connecting to shim 25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180" address="unix:///run/containerd/s/674db11bed9fe739c7e33c8cbb3a0294d78516758cacde0f48e49cf12df95d84" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:20:17.770803 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:17.797830 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:17.805787 systemd[1]: Started cri-containerd-25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180.scope - libcontainer container 25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180. 
Oct 27 08:20:17.827411 systemd-resolved[1300]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 08:20:17.949945 containerd[1611]: time="2025-10-27T08:20:17.949790257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-xhcvf,Uid:7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d03c8065a4f1461f230f8cae73e1754dab146ee6194c3d24222e158b80831a8c\"" Oct 27 08:20:17.952037 containerd[1611]: time="2025-10-27T08:20:17.951984505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:20:17.954359 containerd[1611]: time="2025-10-27T08:20:17.954281245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5854b59-mtcd9,Uid:6dad6e5f-6112-429b-aab8-41593a07cb3d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"736ff779ed555ba31357527c4065d2b13c91139d95c71e5c3cbfca200a55770a\"" Oct 27 08:20:17.956388 containerd[1611]: time="2025-10-27T08:20:17.956319740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wg2gg,Uid:6444bf74-ad4b-45b5-b1d7-45b95c454a19,Namespace:kube-system,Attempt:0,} returns sandbox id \"25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180\"" Oct 27 08:20:17.957043 kubelet[2803]: E1027 08:20:17.956993 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:17.961072 containerd[1611]: time="2025-10-27T08:20:17.961013749Z" level=info msg="CreateContainer within sandbox \"25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:20:17.971346 containerd[1611]: time="2025-10-27T08:20:17.971286877Z" level=info msg="Container ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73: CDI devices from CRI 
Config.CDIDevices: []" Oct 27 08:20:17.979055 containerd[1611]: time="2025-10-27T08:20:17.979000833Z" level=info msg="CreateContainer within sandbox \"25c4694cbf85e02b0d1ed37a14c424512bc534f6ddf3c17e082dae2bde709180\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73\"" Oct 27 08:20:17.979676 containerd[1611]: time="2025-10-27T08:20:17.979634131Z" level=info msg="StartContainer for \"ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73\"" Oct 27 08:20:17.980796 containerd[1611]: time="2025-10-27T08:20:17.980732322Z" level=info msg="connecting to shim ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73" address="unix:///run/containerd/s/674db11bed9fe739c7e33c8cbb3a0294d78516758cacde0f48e49cf12df95d84" protocol=ttrpc version=3 Oct 27 08:20:18.005682 systemd[1]: Started cri-containerd-ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73.scope - libcontainer container ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73. 
Oct 27 08:20:18.045178 containerd[1611]: time="2025-10-27T08:20:18.045136878Z" level=info msg="StartContainer for \"ee415449aef8d1b9f8b1163642f67582feda03c1a0c523ee68f5c47a1c0cec73\" returns successfully" Oct 27 08:20:18.271041 kubelet[2803]: E1027 08:20:18.270901 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:18.272443 kubelet[2803]: E1027 08:20:18.272393 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:18.282713 kubelet[2803]: I1027 08:20:18.282596 2803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wg2gg" podStartSLOduration=42.282551344 podStartE2EDuration="42.282551344s" podCreationTimestamp="2025-10-27 08:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:20:18.28034771 +0000 UTC m=+48.314982658" 
watchObservedRunningTime="2025-10-27 08:20:18.282551344 +0000 UTC m=+48.317186272" Oct 27 08:20:18.308768 containerd[1611]: time="2025-10-27T08:20:18.308701405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:18.310054 containerd[1611]: time="2025-10-27T08:20:18.309824782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:20:18.310054 containerd[1611]: time="2025-10-27T08:20:18.309880697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:18.310668 kubelet[2803]: E1027 08:20:18.310504 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:18.310668 kubelet[2803]: E1027 08:20:18.310570 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:18.311150 kubelet[2803]: E1027 08:20:18.311089 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45kzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-xhcvf_calico-apiserver(7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:18.311992 containerd[1611]: time="2025-10-27T08:20:18.311702466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:20:18.314449 kubelet[2803]: E1027 08:20:18.314377 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e" Oct 27 08:20:18.666394 containerd[1611]: time="2025-10-27T08:20:18.666321996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:18.667553 containerd[1611]: time="2025-10-27T08:20:18.667497202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:20:18.667609 containerd[1611]: time="2025-10-27T08:20:18.667576371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:18.667829 kubelet[2803]: E1027 08:20:18.667772 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:18.667874 kubelet[2803]: E1027 08:20:18.667840 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:18.668050 kubelet[2803]: E1027 08:20:18.668000 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-mtcd9_calico-apiserver(6dad6e5f-6112-429b-aab8-41593a07cb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:18.669216 kubelet[2803]: E1027 08:20:18.669172 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d" Oct 27 08:20:18.823768 systemd-networkd[1513]: calie783f464ad9: Gained IPv6LL Oct 27 08:20:19.015669 systemd-networkd[1513]: calib4ab2c33d89: Gained IPv6LL Oct 27 08:20:19.272796 kubelet[2803]: E1027 
08:20:19.272329 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:19.272796 kubelet[2803]: E1027 08:20:19.272681 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e" Oct 27 08:20:19.273600 kubelet[2803]: E1027 08:20:19.273572 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d" Oct 27 08:20:19.463872 systemd-networkd[1513]: calif0612627b03: Gained IPv6LL Oct 27 08:20:19.989407 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:42134.service - OpenSSH per-connection server daemon (10.0.0.1:42134). 
Oct 27 08:20:20.069726 sshd[4934]: Accepted publickey for core from 10.0.0.1 port 42134 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:20.071903 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:20.076963 systemd-logind[1586]: New session 10 of user core. Oct 27 08:20:20.088618 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:20:20.248064 sshd[4937]: Connection closed by 10.0.0.1 port 42134 Oct 27 08:20:20.248326 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:20.252197 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:42134.service: Deactivated successfully. Oct 27 08:20:20.254302 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:20:20.256001 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Oct 27 08:20:20.257202 systemd-logind[1586]: Removed session 10. Oct 27 08:20:20.274509 kubelet[2803]: E1027 08:20:20.274437 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:25.263303 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:55386.service - OpenSSH per-connection server daemon (10.0.0.1:55386). Oct 27 08:20:25.318720 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 55386 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:25.320179 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:25.324947 systemd-logind[1586]: New session 11 of user core. Oct 27 08:20:25.329656 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 27 08:20:25.444486 sshd[4964]: Connection closed by 10.0.0.1 port 55386 Oct 27 08:20:25.444901 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:25.450098 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:55386.service: Deactivated successfully. Oct 27 08:20:25.452385 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:20:25.453208 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:20:25.454316 systemd-logind[1586]: Removed session 11. Oct 27 08:20:27.080595 containerd[1611]: time="2025-10-27T08:20:27.080442552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:20:28.084036 containerd[1611]: time="2025-10-27T08:20:28.083978669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:28.085143 containerd[1611]: time="2025-10-27T08:20:28.085101658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:20:28.085231 containerd[1611]: time="2025-10-27T08:20:28.085181732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:28.085387 kubelet[2803]: E1027 08:20:28.085278 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:28.085387 kubelet[2803]: E1027 08:20:28.085320 2803 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:28.085806 kubelet[2803]: E1027 08:20:28.085521 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dctvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76bd5dfdc6-jcthj_calico-system(89b2c447-220a-4d0e-8d3d-30370d7bddf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:28.086786 kubelet[2803]: E1027 08:20:28.086722 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9" Oct 27 08:20:29.079996 containerd[1611]: time="2025-10-27T08:20:29.079942171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:20:29.548911 containerd[1611]: time="2025-10-27T08:20:29.548853155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:29.550093 containerd[1611]: time="2025-10-27T08:20:29.550037431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:20:29.550324 containerd[1611]: time="2025-10-27T08:20:29.550111293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:20:29.550352 kubelet[2803]: E1027 08:20:29.550247 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:29.550352 kubelet[2803]: E1027 08:20:29.550304 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:29.550751 kubelet[2803]: E1027 08:20:29.550439 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc141c34926d4069899023aa88fe0b1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:29.552517 containerd[1611]: time="2025-10-27T08:20:29.552447181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
08:20:29.896942 containerd[1611]: time="2025-10-27T08:20:29.896785033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:29.898133 containerd[1611]: time="2025-10-27T08:20:29.898047249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:20:29.898289 containerd[1611]: time="2025-10-27T08:20:29.898152893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:29.898348 kubelet[2803]: E1027 08:20:29.898283 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:29.898502 kubelet[2803]: E1027 08:20:29.898352 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:29.898531 kubelet[2803]: E1027 08:20:29.898494 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:29.899755 kubelet[2803]: E1027 08:20:29.899718 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a" Oct 27 08:20:30.080057 containerd[1611]: time="2025-10-27T08:20:30.079992115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:20:30.442712 containerd[1611]: time="2025-10-27T08:20:30.442646347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:30.444575 containerd[1611]: time="2025-10-27T08:20:30.444534337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:20:30.444642 containerd[1611]: time="2025-10-27T08:20:30.444573753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:30.444838 
kubelet[2803]: E1027 08:20:30.444771 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:30.444838 kubelet[2803]: E1027 08:20:30.444836 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:30.445209 containerd[1611]: time="2025-10-27T08:20:30.445150295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:20:30.445403 kubelet[2803]: E1027 08:20:30.445180 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gbrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q42qv_calico-system(8832f7e4-0882-4808-9716-2c453d412432): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:30.446767 kubelet[2803]: E1027 08:20:30.446703 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432" Oct 27 08:20:30.457158 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:42530.service - OpenSSH per-connection server daemon (10.0.0.1:42530). 
Oct 27 08:20:30.517167 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 42530 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:30.519365 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:30.524074 systemd-logind[1586]: New session 12 of user core. Oct 27 08:20:30.534680 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:20:30.657489 sshd[4985]: Connection closed by 10.0.0.1 port 42530 Oct 27 08:20:30.657861 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:30.663057 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:42530.service: Deactivated successfully. Oct 27 08:20:30.665253 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:20:30.666056 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:20:30.667306 systemd-logind[1586]: Removed session 12. Oct 27 08:20:30.824227 containerd[1611]: time="2025-10-27T08:20:30.824166319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:30.873021 containerd[1611]: time="2025-10-27T08:20:30.872945865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:30.873189 containerd[1611]: time="2025-10-27T08:20:30.873012534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:20:30.873394 kubelet[2803]: E1027 08:20:30.873314 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:30.873394 kubelet[2803]: E1027 08:20:30.873392 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:30.873901 kubelet[2803]: E1027 08:20:30.873571 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45kzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-xhcvf_calico-apiserver(7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:30.874805 kubelet[2803]: E1027 08:20:30.874734 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e" Oct 27 08:20:32.079998 containerd[1611]: time="2025-10-27T08:20:32.079929277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:20:32.403046 containerd[1611]: 
time="2025-10-27T08:20:32.402843328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:32.404196 containerd[1611]: time="2025-10-27T08:20:32.404138844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:20:32.404279 containerd[1611]: time="2025-10-27T08:20:32.404233085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:20:32.404555 kubelet[2803]: E1027 08:20:32.404447 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:32.404555 kubelet[2803]: E1027 08:20:32.404554 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:32.404971 kubelet[2803]: E1027 08:20:32.404692 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:32.406957 containerd[1611]: time="2025-10-27T08:20:32.406911943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:20:32.736966 containerd[1611]: time="2025-10-27T08:20:32.736796127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:32.738202 containerd[1611]: time="2025-10-27T08:20:32.738130155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:20:32.738513 kubelet[2803]: E1027 08:20:32.738437 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:20:32.738588 kubelet[2803]: E1027 08:20:32.738525 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:20:32.738723 kubelet[2803]: E1027 08:20:32.738679 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:32.739999 kubelet[2803]: E1027 08:20:32.739908 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:32.740237 containerd[1611]: time="2025-10-27T08:20:32.738202856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:20:35.079907 containerd[1611]: time="2025-10-27T08:20:35.079856016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:20:35.521434 containerd[1611]: time="2025-10-27T08:20:35.521276675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:35.638763 containerd[1611]: time="2025-10-27T08:20:35.638681345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:20:35.638763 containerd[1611]: 
time="2025-10-27T08:20:35.638755417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:35.639047 kubelet[2803]: E1027 08:20:35.638986 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:35.639047 kubelet[2803]: E1027 08:20:35.639048 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:35.639717 kubelet[2803]: E1027 08:20:35.639194 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-mtcd9_calico-apiserver(6dad6e5f-6112-429b-aab8-41593a07cb3d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:35.640454 kubelet[2803]: E1027 08:20:35.640390 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d" Oct 27 08:20:35.674771 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). Oct 27 08:20:35.738734 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:35.740315 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:35.744862 systemd-logind[1586]: New session 13 of user core. Oct 27 08:20:35.755954 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:20:35.910305 sshd[5008]: Connection closed by 10.0.0.1 port 42542 Oct 27 08:20:35.910705 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:35.915908 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:42542.service: Deactivated successfully. Oct 27 08:20:35.918162 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:20:35.919224 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:20:35.920573 systemd-logind[1586]: Removed session 13. 
Oct 27 08:20:40.080221 kubelet[2803]: E1027 08:20:40.079656 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9" Oct 27 08:20:40.927189 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:35610.service - OpenSSH per-connection server daemon (10.0.0.1:35610). Oct 27 08:20:40.978905 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 35610 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:40.980277 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:40.984403 systemd-logind[1586]: New session 14 of user core. Oct 27 08:20:40.998646 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 27 08:20:41.079428 kubelet[2803]: E1027 08:20:41.079383 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e" Oct 27 08:20:41.114701 sshd[5029]: Connection closed by 10.0.0.1 port 35610 Oct 27 08:20:41.115094 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:41.125948 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:35610.service: Deactivated successfully. Oct 27 08:20:41.128364 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:20:41.129284 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:20:41.132968 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:35614.service - OpenSSH per-connection server daemon (10.0.0.1:35614). Oct 27 08:20:41.133883 systemd-logind[1586]: Removed session 14. Oct 27 08:20:41.196095 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 35614 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:41.197556 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:41.202272 systemd-logind[1586]: New session 15 of user core. Oct 27 08:20:41.209635 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 27 08:20:41.408449 sshd[5047]: Connection closed by 10.0.0.1 port 35614 Oct 27 08:20:41.408802 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:41.419196 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:35614.service: Deactivated successfully. Oct 27 08:20:41.421009 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:20:41.421809 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:20:41.424845 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:35628.service - OpenSSH per-connection server daemon (10.0.0.1:35628). Oct 27 08:20:41.425557 systemd-logind[1586]: Removed session 15. Oct 27 08:20:41.483575 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 35628 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:41.485071 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:41.489515 systemd-logind[1586]: New session 16 of user core. Oct 27 08:20:41.499608 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:20:41.639448 sshd[5061]: Connection closed by 10.0.0.1 port 35628 Oct 27 08:20:41.640362 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:41.647719 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:35628.service: Deactivated successfully. Oct 27 08:20:41.650045 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:20:41.650955 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:20:41.652287 systemd-logind[1586]: Removed session 16. 
Oct 27 08:20:42.081516 kubelet[2803]: E1027 08:20:42.081460 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:43.079192 kubelet[2803]: E1027 08:20:43.079135 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432" Oct 27 08:20:43.475624 containerd[1611]: time="2025-10-27T08:20:43.475459445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\" id:\"415af36f2f5c0c3deb286d83c6f6a5d96fa68d3bcda665ccb733fed9d3752116\" pid:5085 exited_at:{seconds:1761553243 nanos:474868474}" Oct 27 08:20:43.479858 kubelet[2803]: E1027 08:20:43.479826 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:44.080026 kubelet[2803]: E1027 08:20:44.079956 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a" Oct 27 08:20:45.079372 kubelet[2803]: E1027 08:20:45.079198 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f" Oct 27 08:20:46.661730 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:35642.service - OpenSSH per-connection server daemon (10.0.0.1:35642). 
Oct 27 08:20:46.732107 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 35642 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:46.733841 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:46.738299 systemd-logind[1586]: New session 17 of user core. Oct 27 08:20:46.748718 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:20:46.943936 sshd[5105]: Connection closed by 10.0.0.1 port 35642 Oct 27 08:20:46.944460 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:46.949355 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:35642.service: Deactivated successfully. Oct 27 08:20:46.951668 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 08:20:46.952434 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:20:46.954018 systemd-logind[1586]: Removed session 17. Oct 27 08:20:48.082692 kubelet[2803]: E1027 08:20:48.082633 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d" Oct 27 08:20:50.079090 kubelet[2803]: E1027 08:20:50.079040 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 08:20:51.965099 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:56716.service - OpenSSH per-connection server daemon (10.0.0.1:56716). 
Oct 27 08:20:52.043146 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 56716 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:52.045247 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:52.050019 systemd-logind[1586]: New session 18 of user core. Oct 27 08:20:52.059629 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:20:52.192754 sshd[5127]: Connection closed by 10.0.0.1 port 56716 Oct 27 08:20:52.193107 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:52.198755 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:56716.service: Deactivated successfully. Oct 27 08:20:52.201065 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:20:52.201892 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:20:52.203172 systemd-logind[1586]: Removed session 18. Oct 27 08:20:53.081950 containerd[1611]: time="2025-10-27T08:20:53.081903091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:20:53.532739 containerd[1611]: time="2025-10-27T08:20:53.532679127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:53.577160 containerd[1611]: time="2025-10-27T08:20:53.577091861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:53.577160 containerd[1611]: time="2025-10-27T08:20:53.577114985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:20:53.577489 kubelet[2803]: E1027 08:20:53.577408 2803 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:53.577920 kubelet[2803]: E1027 08:20:53.577506 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:20:53.578941 kubelet[2803]: E1027 08:20:53.578859 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name
:kube-api-access-dctvv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76bd5dfdc6-jcthj_calico-system(89b2c447-220a-4d0e-8d3d-30370d7bddf9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:53.580139 kubelet[2803]: E1027 08:20:53.580065 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9" Oct 27 08:20:56.080503 containerd[1611]: time="2025-10-27T08:20:56.080149145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:20:56.423949 containerd[1611]: time="2025-10-27T08:20:56.423782430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:56.425766 containerd[1611]: time="2025-10-27T08:20:56.425700429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:20:56.425838 containerd[1611]: time="2025-10-27T08:20:56.425746807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:56.426011 kubelet[2803]: E1027 08:20:56.425964 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:56.426371 kubelet[2803]: E1027 08:20:56.426023 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:20:56.426371 kubelet[2803]: E1027 08:20:56.426201 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45kzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-xhcvf_calico-apiserver(7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:56.427435 kubelet[2803]: E1027 08:20:56.427386 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e" Oct 27 08:20:57.080037 containerd[1611]: time="2025-10-27T08:20:57.079678699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:20:57.210126 systemd[1]: Started 
sshd@18-10.0.0.35:22-10.0.0.1:56728.service - OpenSSH per-connection server daemon (10.0.0.1:56728). Oct 27 08:20:57.278173 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 56728 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0 Oct 27 08:20:57.279847 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:20:57.284429 systemd-logind[1586]: New session 19 of user core. Oct 27 08:20:57.303731 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 08:20:57.426697 sshd[5150]: Connection closed by 10.0.0.1 port 56728 Oct 27 08:20:57.426961 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Oct 27 08:20:57.432495 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:56728.service: Deactivated successfully. Oct 27 08:20:57.435200 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:20:57.436510 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:20:57.438400 systemd-logind[1586]: Removed session 19. 
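Every pull failure in this log follows the same shape: containerd reports `404 Not Found` from `ghcr.io`, and kubelet's `kuberuntime_image.go` entry carries the failing reference in its `image="…"` field, which is the most reliable place to read it. A minimal parsing sketch under that assumption (the sample line is trimmed from the log above; `failing_images` is an illustrative helper name):

```python
import re

def failing_images(log_text):
    """Collect the unique image references from kubelet pull-failure lines.

    Matches the image="..." field that kuberuntime_image.go appends to its
    "Failed to pull image" entries, as seen throughout this log.
    """
    return sorted(set(re.findall(r'image="([^"]+)"', log_text)))

# One such entry, trimmed for brevity from the log above.
sample = (
    'kubelet[2803]: E1027 08:20:53.577506 2803 kuberuntime_image.go:42] '
    '"Failed to pull image" err="rpc error: code = NotFound ..." '
    'image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"'
)
print(failing_images(sample))  # → ['ghcr.io/flatcar/calico/kube-controllers:v3.30.4']
```

Run against the full journal, this yields the complete set of unresolvable references (`apiserver`, `csi`, `goldmane`, `kube-controllers`, `node-driver-registrar`, `whisker`, `whisker-backend`, all at `v3.30.4`), which narrows the fault to the registry/tag side rather than to the nodes.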
Oct 27 08:20:57.442059 containerd[1611]: time="2025-10-27T08:20:57.441998275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:57.443465 containerd[1611]: time="2025-10-27T08:20:57.443417615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:20:57.443600 containerd[1611]: time="2025-10-27T08:20:57.443543995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:20:57.443786 kubelet[2803]: E1027 08:20:57.443723 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:57.444114 kubelet[2803]: E1027 08:20:57.443795 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:20:57.444114 kubelet[2803]: E1027 08:20:57.443957 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cc141c34926d4069899023aa88fe0b1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:57.446140 containerd[1611]: time="2025-10-27T08:20:57.446103332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 
08:20:57.817698 containerd[1611]: time="2025-10-27T08:20:57.817644388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:57.819218 containerd[1611]: time="2025-10-27T08:20:57.819167756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:20:57.819291 containerd[1611]: time="2025-10-27T08:20:57.819256414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:20:57.819481 kubelet[2803]: E1027 08:20:57.819413 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:57.819559 kubelet[2803]: E1027 08:20:57.819495 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:20:57.819718 kubelet[2803]: E1027 08:20:57.819660 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrhnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-94557dddf-cnjrr_calico-system(0b6943bc-5290-49e7-ad2e-2226e6164e9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:57.820910 kubelet[2803]: E1027 08:20:57.820853 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a" Oct 27 08:20:58.083163 containerd[1611]: time="2025-10-27T08:20:58.082617234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:20:58.436785 containerd[1611]: time="2025-10-27T08:20:58.436634325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:58.437888 containerd[1611]: time="2025-10-27T08:20:58.437834947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:20:58.437888 containerd[1611]: time="2025-10-27T08:20:58.437875504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:20:58.438076 kubelet[2803]: E1027 08:20:58.438030 
2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:58.438148 kubelet[2803]: E1027 08:20:58.438085 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:20:58.438366 kubelet[2803]: E1027 08:20:58.438310 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:20:58.438583 containerd[1611]: time="2025-10-27T08:20:58.438489592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:20:58.787141 containerd[1611]: time="2025-10-27T08:20:58.787068002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:20:58.788595 containerd[1611]: time="2025-10-27T08:20:58.788529571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:20:58.788760 containerd[1611]: time="2025-10-27T08:20:58.788655951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:20:58.788927 kubelet[2803]: E1027 08:20:58.788874 2803 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:58.789305 kubelet[2803]: E1027 08:20:58.788942 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:20:58.789419 kubelet[2803]: E1027 08:20:58.789324 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldman
e-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gbrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q42qv_calico-system(8832f7e4-0882-4808-9716-2c453d412432): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 27 08:20:58.789570 containerd[1611]: time="2025-10-27T08:20:58.789449459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 27 08:20:58.790842 kubelet[2803]: E1027 08:20:58.790598 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432"
Oct 27 08:20:59.145396 containerd[1611]: time="2025-10-27T08:20:59.145222453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 27 08:20:59.146576 containerd[1611]: time="2025-10-27T08:20:59.146529537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 27 08:20:59.146658 containerd[1611]: time="2025-10-27T08:20:59.146590634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 27 08:20:59.146851 kubelet[2803]: E1027 08:20:59.146799 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 27 08:20:59.146929 kubelet[2803]: E1027 08:20:59.146867 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 27 08:20:59.147136 kubelet[2803]: E1027 08:20:59.147068 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db7b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2dvr_calico-system(35ccb1c2-1c56-4133-a090-83b933f5454f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 27 08:20:59.148435 kubelet[2803]: E1027 08:20:59.148371 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f"
Oct 27 08:21:00.079344 kubelet[2803]: E1027 08:21:00.079224 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:21:02.450790 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:50186.service - OpenSSH per-connection server daemon (10.0.0.1:50186).
Oct 27 08:21:02.512961 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 50186 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:02.514448 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:02.519174 systemd-logind[1586]: New session 20 of user core.
Oct 27 08:21:02.525632 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 27 08:21:02.639699 sshd[5168]: Connection closed by 10.0.0.1 port 50186
Oct 27 08:21:02.640320 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:02.651009 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:50186.service: Deactivated successfully.
Oct 27 08:21:02.653142 systemd[1]: session-20.scope: Deactivated successfully.
Oct 27 08:21:02.654025 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit.
Oct 27 08:21:02.657354 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:50192.service - OpenSSH per-connection server daemon (10.0.0.1:50192).
Oct 27 08:21:02.658451 systemd-logind[1586]: Removed session 20.
Oct 27 08:21:02.714427 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 50192 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:02.715927 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:02.720690 systemd-logind[1586]: New session 21 of user core.
Oct 27 08:21:02.726631 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 27 08:21:03.050306 sshd[5185]: Connection closed by 10.0.0.1 port 50192
Oct 27 08:21:03.050844 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:03.060741 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:50192.service: Deactivated successfully.
Oct 27 08:21:03.063707 systemd[1]: session-21.scope: Deactivated successfully.
Oct 27 08:21:03.064944 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit.
Oct 27 08:21:03.068866 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:50194.service - OpenSSH per-connection server daemon (10.0.0.1:50194).
Oct 27 08:21:03.069942 systemd-logind[1586]: Removed session 21.
Oct 27 08:21:03.079677 containerd[1611]: time="2025-10-27T08:21:03.079631800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 27 08:21:03.127558 sshd[5197]: Accepted publickey for core from 10.0.0.1 port 50194 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:03.129428 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:03.134830 systemd-logind[1586]: New session 22 of user core.
Oct 27 08:21:03.143661 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 27 08:21:03.458424 containerd[1611]: time="2025-10-27T08:21:03.458267585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 27 08:21:03.465660 containerd[1611]: time="2025-10-27T08:21:03.465574347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 27 08:21:03.465660 containerd[1611]: time="2025-10-27T08:21:03.465639641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 27 08:21:03.465935 kubelet[2803]: E1027 08:21:03.465879 2803 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 27 08:21:03.466363 kubelet[2803]: E1027 08:21:03.465946 2803 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 27 08:21:03.466363 kubelet[2803]: E1027 08:21:03.466093 2803 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hmwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-57d5854b59-mtcd9_calico-apiserver(6dad6e5f-6112-429b-aab8-41593a07cb3d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 27 08:21:03.467336 kubelet[2803]: E1027 08:21:03.467286 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d"
Oct 27 08:21:03.917072 sshd[5200]: Connection closed by 10.0.0.1 port 50194
Oct 27 08:21:03.919506 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:03.926915 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:50194.service: Deactivated successfully.
Oct 27 08:21:03.929304 systemd[1]: session-22.scope: Deactivated successfully.
Oct 27 08:21:03.932925 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit.
Oct 27 08:21:03.940792 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:50208.service - OpenSSH per-connection server daemon (10.0.0.1:50208).
Oct 27 08:21:03.942280 systemd-logind[1586]: Removed session 22.
Oct 27 08:21:03.992205 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 50208 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:03.993760 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:03.998698 systemd-logind[1586]: New session 23 of user core.
Oct 27 08:21:04.005671 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 27 08:21:04.249355 sshd[5224]: Connection closed by 10.0.0.1 port 50208
Oct 27 08:21:04.250120 sshd-session[5221]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:04.262278 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:50208.service: Deactivated successfully.
Oct 27 08:21:04.264958 systemd[1]: session-23.scope: Deactivated successfully.
Oct 27 08:21:04.269333 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit.
Oct 27 08:21:04.271618 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218).
Oct 27 08:21:04.272769 systemd-logind[1586]: Removed session 23.
Oct 27 08:21:04.326047 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:04.327709 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:04.333363 systemd-logind[1586]: New session 24 of user core.
Oct 27 08:21:04.341711 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 27 08:21:04.462132 sshd[5238]: Connection closed by 10.0.0.1 port 50218
Oct 27 08:21:04.462821 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:04.470648 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:50218.service: Deactivated successfully.
Oct 27 08:21:04.472970 systemd[1]: session-24.scope: Deactivated successfully.
Oct 27 08:21:04.474033 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit.
Oct 27 08:21:04.475946 systemd-logind[1586]: Removed session 24.
Oct 27 08:21:05.079030 kubelet[2803]: E1027 08:21:05.078963 2803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 08:21:06.080064 kubelet[2803]: E1027 08:21:06.079873 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9"
Oct 27 08:21:09.479662 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:50224.service - OpenSSH per-connection server daemon (10.0.0.1:50224).
Oct 27 08:21:09.535093 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:09.537285 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:09.542592 systemd-logind[1586]: New session 25 of user core.
Oct 27 08:21:09.550695 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 27 08:21:09.668406 sshd[5256]: Connection closed by 10.0.0.1 port 50224
Oct 27 08:21:09.668887 sshd-session[5253]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:09.674290 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:50224.service: Deactivated successfully.
Oct 27 08:21:09.677389 systemd[1]: session-25.scope: Deactivated successfully.
Oct 27 08:21:09.678427 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit.
Oct 27 08:21:09.680504 systemd-logind[1586]: Removed session 25.
Oct 27 08:21:11.079692 kubelet[2803]: E1027 08:21:11.079639 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q42qv" podUID="8832f7e4-0882-4808-9716-2c453d412432"
Oct 27 08:21:11.080138 kubelet[2803]: E1027 08:21:11.079768 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-xhcvf" podUID="7f6d69dd-b2c4-4429-9e0c-5cd505f17f7e"
Oct 27 08:21:11.080351 kubelet[2803]: E1027 08:21:11.080310 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-94557dddf-cnjrr" podUID="0b6943bc-5290-49e7-ad2e-2226e6164e9a"
Oct 27 08:21:13.324148 containerd[1611]: time="2025-10-27T08:21:13.324047121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a502c1420c9f8a3340ca5bbd8444c359908645ade62a48658cf4887c9aac0be\" id:\"9420611faebc731177421c2947e6d5d90c0ed9b7de6f37d3c3e9717ed80d3f37\" pid:5282 exited_at:{seconds:1761553273 nanos:323685325}"
Oct 27 08:21:14.080077 kubelet[2803]: E1027 08:21:14.080004 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2dvr" podUID="35ccb1c2-1c56-4133-a090-83b933f5454f"
Oct 27 08:21:14.692643 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:46498.service - OpenSSH per-connection server daemon (10.0.0.1:46498).
Oct 27 08:21:14.758939 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 46498 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:14.760668 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:14.765289 systemd-logind[1586]: New session 26 of user core.
Oct 27 08:21:14.772634 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 27 08:21:14.900309 sshd[5299]: Connection closed by 10.0.0.1 port 46498
Oct 27 08:21:14.900669 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:14.905582 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:46498.service: Deactivated successfully.
Oct 27 08:21:14.907860 systemd[1]: session-26.scope: Deactivated successfully.
Oct 27 08:21:14.908686 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit.
Oct 27 08:21:14.910014 systemd-logind[1586]: Removed session 26.
Oct 27 08:21:16.079520 kubelet[2803]: E1027 08:21:16.079308 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d5854b59-mtcd9" podUID="6dad6e5f-6112-429b-aab8-41593a07cb3d"
Oct 27 08:21:19.079294 kubelet[2803]: E1027 08:21:19.079220 2803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76bd5dfdc6-jcthj" podUID="89b2c447-220a-4d0e-8d3d-30370d7bddf9"
Oct 27 08:21:19.921864 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:46504.service - OpenSSH per-connection server daemon (10.0.0.1:46504).
Oct 27 08:21:19.981100 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 46504 ssh2: RSA SHA256:GDcu4vW3ekSV6ewDeq2XA5b2Yu5u0lv3YJ8O5CVbwa0
Oct 27 08:21:19.983244 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 27 08:21:19.988554 systemd-logind[1586]: New session 27 of user core.
Oct 27 08:21:19.998645 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 27 08:21:20.123034 sshd[5319]: Connection closed by 10.0.0.1 port 46504
Oct 27 08:21:20.124714 sshd-session[5316]: pam_unix(sshd:session): session closed for user core
Oct 27 08:21:20.130384 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:46504.service: Deactivated successfully.
Oct 27 08:21:20.132833 systemd[1]: session-27.scope: Deactivated successfully.
Oct 27 08:21:20.133775 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit.
Oct 27 08:21:20.135370 systemd-logind[1586]: Removed session 27.